no-problem/9912/hep-ph9912543.html
# Reply to comment on parton distributions, $`d/u`$, and higher twist effects at high $`x`$

M. Melnitchouk et al. misunderstand the model of Frankfurt and Strikman for nuclear binding effects in deep inelastic scattering (DIS). In addition, their comment is entirely irrelevant to the results of our article. In the Frankfurt-Strikman model the main parameter used to describe the deviations of the structure functions of bound nucleons from those of free nucleons is the average kinetic energy of the nucleons in the nucleus, with small corrections due to the binding energy (the parameter is $`k^2/2m+ϵ_A`$, where $`k`$ is the nucleon momentum and $`ϵ_A`$ is the binding energy per nucleon, $`ϵ_A\approx 8`$ MeV for heavy nuclei, $`A\sim 200`$). If the value of $`x`$ is not too large (below $`x=0.75`$), the binding effects are proportional to the average value of $`k^2/2m+ϵ_A`$. For heavy nuclei one can safely approximate the resulting A-dependence of the binding effects in terms of the A-dependence of $`k^2/2m`$. Note that the Fermi motion effects are also proportional to the average value of $`k^2`$ in this $`x`$-range. Frankfurt and Strikman also argue that a similar pattern holds in a wide range of models, including pion models and the so-called nuclear binding models, where the large-$`x`$ depletion of the nuclear structure functions comes from a reduction of the light-cone momentum fraction carried by nucleons. Hence the overall deviation from 1 of the ratio of the nuclear structure functions to that of free nucleons, $`R_A(x)-1`$, factorizes as $`\varphi (A)f(x,Q^2)`$. An estimate of the overall scale of $`\varphi (A)`$ and of the function $`f(x,Q^2)`$ can be extracted from the SLAC data on heavy targets. It is well known that in mean field nuclear models, $`k^2/2m`$ is proportional to the average nuclear density (this is also approximately valid for $`k^2/2m+ϵ_A`$). The nuclear density fit gives a good description of the SLAC data. We have used the SLAC nuclear density fit to obtain the effects of nucleon binding in the deuteron on the structure functions. The SLAC fit implies that the effects in the deuteron are about 25% of the effects in iron. For comparison, Frankfurt and Strikman calculate the following ratio of the relevant quantity in iron and deuterium: $`\frac{k^2_{Fe}+ϵ_{Fe}}{k^2_D+ϵ_D}\approx 5`$. From this ratio they extract the relation $`(F_{2D}/F_{2N}-1)=0.25(F_{2Fe}/F_{2D}-1)`$ (since the binding effects scale by a factor of about 5, $`F_{2Fe}/F_{2N}-1\approx 5(F_{2D}/F_{2N}-1)`$, and for small deviations $`F_{2Fe}/F_{2D}-1\approx 4(F_{2D}/F_{2N}-1)`$), which is close to the value used in our paper. Therefore, although the notion of nuclear density for the deuteron may not be very well defined, the value of the nuclear density for deuterium that was used in the SLAC fit yields a correction for nuclear binding in the deuteron similar to the estimate by Frankfurt and Strikman. Note that the uncertainty in the value of $`0.25`$ is about $`20\%`$, and that the $`x`$-dependence is well constrained by the $`A\ge 4`$ data. Figure 1 of our paper shows that the nuclear binding effects in deuterium as estimated from the nuclear density (solid line) are actually almost identical to the model of Melnitchouk and Thomas (dashed line) in the region of $`x=0.6`$. Therefore, all three models for the nuclear binding corrections to the structure functions in the deuteron, namely the SLAC nuclear density fit, the Frankfurt and Strikman model, and the Melnitchouk and Thomas model, are in agreement with the corrections used in our paper at large $`x`$. The proportionality of the nuclear binding corrections to the average kinetic energy $`k^2`$ is quite generic and hence holds in the model of Melnitchouk and Thomas as well.
When one averages over nucleon momenta, one ends up with a similar combination of kinetic energy and binding energy for their model also. However, the predictions of the Melnitchouk and Thomas model (which so far has not been applied to $`A\ge 4`$ nuclei) show large binding effects in deuterium at $`x=0.3`$, where there is no difference between the structure functions of iron and deuterium. At $`x=0.2`$, the model predicts binding effects in deuterium which are of opposite sign to those in iron. These strange features of the Melnitchouk-Thomas model are the reasons why we did not use that model in the extraction of neutron structure functions from deuterium data. The strange features of their model at lower $`x`$ likely originate from the fact that the energy-momentum sum rule is not satisfied in their theory. A resolution of this problem and tests of the model for heavy nuclei are needed. If light nuclei like <sup>3</sup>He and <sup>3</sup>H are considered, the original formulae of Frankfurt and Strikman in terms of $`k`$ and $`ϵ`$ (with realistic models of the A=3 wave functions) should be used. Here also, there is a great advantage in relating the nuclear binding effects in a light nucleus to the experimental ratio of the binding effects in iron and deuterium. Within such a framework, the energy-momentum sum rules are satisfied, and the models can be used over a larger range of $`x`$.
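For concreteness, here is a minimal numerical sketch of the scaling argument above (my illustration, not part of the original reply; the input iron-to-deuterium ratio at the end is hypothetical):

```python
# Frankfurt-Strikman scaling: binding effects are proportional to
# <k^2>/2m + eps, and the Fe/D ratio of this quantity is about 5.
r = 5.0

# If F2Fe/F2N - 1 = r * (F2D/F2N - 1), then for small deviations
# F2Fe/F2D - 1 ~ (r - 1) * (F2D/F2N - 1), giving the factor 1/(r - 1).
factor = 1.0 / (r - 1.0)
print(factor)  # 0.25

# Hypothetical example: a 10% depletion of F2Fe/F2D at large x
# implies a ~2.5% binding correction in the deuteron.
f2fe_over_f2d = 0.90
print(factor * (f2fe_over_f2d - 1.0))  # -0.025
```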
no-problem/9912/hep-lat9912030.html
## 1 Introduction

In this talk I will briefly discuss several issues that I have thought about since the Ginsparg-Wilson (GW) relation and Ginsparg-Wilson-Lüscher (GWL) symmetry became popular topics in lattice field theory. Most of these issues are not resolved to my satisfaction (if at all), which actually makes them appropriate material to discuss at a workshop like this.

In lattice field theory we typically want to use some finite or countably infinite set of variables to define, as a sequence of approximations, the theory which formally involves a continuous infinity of variables. The most important guides for doing this, both correctly and efficiently, are the symmetries. The dynamics of theories relevant in particle physics (such as QCD) is crucially driven by (i) Poincaré symmetry, (ii) gauge symmetry, and (iii) chiral symmetries. Obviously, the lattice counterparts of these do not involve precisely the same transformations, since they act on a different set of degrees of freedom. The goal is rather to choose the discrete set of variables and the set of symmetry conditions so that the dynamics is constrained in a way analogous to that in the continuum. While this is quite non-unique, we usually stick to very definite choices. Thus, trying to account for at least some of the Poincaré invariance, the variables are usually associated with the hypercubic lattice structure, and their Euclidean dynamics is required to respect its symmetries. With gauge invariance in mind, it is most common to associate fermionic variables with sites and gauge group elements with links of the lattice, and to form actions built out of closed gauge loops or open gauge loops with fermionic variables at the ends.

While in this setup it is trivial to restrict the actions further by requiring invariance under the on-site $`\gamma _5`$ rotation (naive chiral symmetry), for well known reasons the resulting set of actions is just too small to define the theories we want. This situation is shown in Fig. 1, where the set $`A`$ represents acceptable fermionic actions, quadratic in fermionic variables, and with “easy symmetries”. The subset $`A^L`$ of local actions is usually considered (with at least exponentially decaying couplings at large distances in arbitrary gauge background), because of the fear of non-universality in the non-local case. The problems with naive chiral symmetry are reflected by the fact that there is no intersection of the subset of symmetric actions $`A^C`$ with the subset of doubler-free actions $`A^{ND}`$ on the local side of the diagram. This is a consequence of the Nielsen-Ninomiya theorem.

A possible clean resolution of this is contained in the proposition that a lattice theory for which the chirally nonsymmetric part of the propagator is local is, in virtually all important aspects, as good as one whose chirally nonsymmetric part is zero. This is plausible, because the “important aspects” are typically associated with properties of fermionic correlation functions at large distances. These, in turn, depend crucially on the long-distance behaviour of the propagator, hence the significance of the above property. The actions satisfying this requirement became known as GW actions, and if we represent the elements of $`A`$ by the corresponding Dirac kernels $`D`$, then we have

$$A^{GW}\equiv \{D\in A:(D^{-1})_N\ \text{is local}\},\qquad (D^{-1})_N\equiv \frac{1}{2}\gamma _5\{\gamma _5,D^{-1}\}$$

GW kernels are not particularly generic.
For the free Wilson–Dirac operator in Fourier space we have, for example,

$$\left(D_W^{-1}\right)_N=\frac{\sum_\mu (1-\mathrm{cos}\,p_\mu )}{\left(\sum_\mu (1-\mathrm{cos}\,p_\mu )\right)^2+\sum_\mu \mathrm{sin}^2p_\mu }\,𝕀$$

where $`𝕀`$ is the identity matrix in spinor space. The second partial derivatives of the scalar function in the above expression at $`p=0`$ depend on the direction of approach, implying that the operator is non-local. The chirally nonsymmetric part of the propagator affects the long distance physics, and the chiral properties of the Wilson–Dirac operator are bad.
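This direction dependence is easy to exhibit numerically. The sketch below (my own check, not part of the original text) evaluates the scalar coefficient of $`(D_W^{-1})_N`$ in four dimensions along two directions approaching $`p=0`$; the quadratic coefficient differs, so the function is not analytic at the origin and the kernel cannot be ultralocal:

```python
import numpy as np

def f(p):
    """Scalar coefficient of (D_W^{-1})_N for the free Wilson-Dirac operator."""
    a = np.sum(1.0 - np.cos(p))   # sum_mu (1 - cos p_mu)
    b = np.sum(np.sin(p) ** 2)    # sum_mu sin^2 p_mu
    return a / (a * a + b)

axis = np.array([1.0, 0.0, 0.0, 0.0])          # unit vector along an axis
diag = np.array([1.0, 1.0, 1.0, 1.0]) / 2.0    # unit vector along the diagonal

for t in (1e-1, 1e-2, 1e-3):
    # quadratic coefficient of f(t*n) - f(0), with f(0) = 1/2
    print(t, (f(t * axis) - 0.5) / t**2, (f(t * diag) - 0.5) / t**2)
# Along an axis the coefficient is identically 0 (there f = 1/2 exactly),
# while along the diagonal it tends to -3/32: the curvature at p = 0 is
# direction-dependent, i.e. f is non-analytic there.
```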
The reason why the above considerations are exciting is that it appears that fermion doubling is not a definite property of local GW actions. In other words, $`A^{GW}\cap A^{ND}\cap A^L\ne \mathrm{\varnothing }`$, as indicated in Fig. 1. Apart from the conjectured existence of this intersection, not much is known about the structure of the set $`A^{GW}`$. The relevant interesting questions include the following: Are there any useful definite properties of the set $`A^{GW}`$ and the set $`A^{GW}\cap A^{ND}`$? Can we classify all GW actions by some useful characteristics? How simple can GW actions be? What is a good definition of “simple” for these actions? In what follows, I will discuss certain issues that are relevant to these kinds of questions.

## 2 Non-Ultralocality of GWL Transformations

There is one fully general result here that reveals the inherent property of GW actions and the nature of the symmetry they share. Considering the GWL transformations $`\delta \psi =i\theta \gamma _5(𝕀-RD)\psi `$, $`\delta \overline{\psi }=\overline{\psi }i\theta (𝕀-DR)\gamma _5`$, such that $`[R,\gamma _5]=0`$, it is well known that the set $`A^{GW}`$ can be alternatively defined through the symmetry principle, i.e.

$$A^{GW}\equiv \{D\in A,\ R\ \text{local}:\delta (\overline{\psi }D\psi )=0\}$$

The following result can be proved:

* If $`D\in A^{GW}`$, then the corresponding infinitesimal GWL transformation couples variables at arbitrarily large lattice distances, except when $`R=0`$ (standard chiral symmetry).

This is equivalent to non-ultralocality of $`𝒟\equiv 2RD=2(D^{-1})_ND`$, assigned to any $`D\in A^{GW}`$, except when $`D\in A^C`$. Ref. actually deals in detail with the physically relevant case of local elements of $`A^{GW}`$, but the result is in fact true for all elements. The above theorem on “weak non-ultralocality” is represented in Fig. 2, where the set $`A^{GW}`$ is split into the parts with ultralocal GWL transformation ($`\overline{A}^U`$) and non-ultralocal GWL transformation ($`\overline{A}^{NU}`$). Since $`\overline{A}^U=A^C`$, this means that there is a sharp discontinuity in the set $`A^{GW}`$. Naively, $`A^C`$ represents a smooth limit in $`A^{GW}`$ as the chirally nonsymmetric part of the propagator completely vanishes. However, while a chiral transformation only mixes variables on a single site, the nontrivial infinitesimal GWL symmetry operation requires a rearrangement of infinitely many degrees of freedom (on an infinite lattice). This is a necessary requirement to achieve the delicate goal of preserving chiral dynamics while keeping doublers away.

## 3 Non-Ultralocality of GW Actions

From the theorem on weak non-ultralocality it follows that, except for the subset $`A^C`$, there are no ultralocal elements of $`A^{GW}`$ for which $`R=(D^{-1})_N`$ is ultralocal. Apart from its conceptual value, this obviously has serious practical consequences for both perturbation theory and numerical simulations.

While the above subset of $`A^{GW}`$ is the one usually considered in the literature, it would be of great interest to know whether non-ultralocality of GW actions extends to the more general case. Contrary to weak non-ultralocality, which holds even in the presence of fermion doubling, non-ultralocality of actions can only hold for doubler-free actions. This is because (at least in the free case) there are infinitely many chirally nonsymmetric ultralocal GW actions with doublers, e.g.

$$D(p)=\sum_\mu \mathrm{sin}^2p_\mu \,𝕀+i\,\mathrm{sin}\,p_\mu \gamma _\mu ,\qquad \left(D^{-1}\right)_N=\frac{1}{1+\sum_\mu \mathrm{sin}^2p_\mu }\,𝕀$$

Consequently, at the free level, the hypothesis of “strong non-ultralocality” can be formulated as follows:

* HYPOTHESIS: There is no $`D(p)\in A`$ such that the following three requirements are satisfied simultaneously: ($`\alpha `$) $`D(p)`$ involves a finite number of Fourier terms; ($`\beta `$) $`\left(D^{-1}(p)\right)_N`$ is analytic; ($`\gamma `$) $`\left(D^{-1}(p)\right)_C`$ has no poles except if $`p_\mu =0\ (\mathrm{mod}\ 2\pi )\ \forall \mu `$.

Conditions ($`\alpha `$)–($`\gamma `$) represent ultralocality, GWL symmetry, and the absence of doublers. Below I will describe an algebraic problem which, I believe, holds the key to this issue. My reasoning will necessarily be terse, but the resulting problem will be stated clearly. If non-ultralocality indeed holds, it will most likely result from the clash of the two analyticity properties ($`\beta `$), ($`\gamma `$). I will consider the two-dimensional restrictions of the lattice Dirac operators in higher (even) dimensions, because they are already capable of capturing the required analytic structure. As a result of hypercubic symmetry, the restrictions have the form (the term proportional to $`\gamma _1\gamma _2`$ is ignored for simplicity)

$$D(p)=A(p)𝕀+iB_\mu (p)\gamma _\mu ,\qquad D^{-1}=\frac{A𝕀-iB_\mu \gamma _\mu }{A^2+B_\mu B_\mu }$$

where $`p=(p_1,p_2)`$, $`\mu =1,2`$ and the functions $`A(p),B_\mu (p)`$ have the appropriate symmetry properties. The crucial difference between ultralocal and non-ultralocal actions is that in the former case we only have a finite number of coefficients to adjust so that ($`\beta `$), ($`\gamma `$) are satisfied, while in the latter case there are infinitely many. This is more explicit if one makes a change of variables, such as $`x=\mathrm{sin}\frac{p_1}{2},y=\mathrm{sin}\frac{p_2}{2}`$, which does not change the analytic structure of the relevant functions. Then we are essentially dealing with polynomials. It is easy to see that requirement ($`\beta `$) is particularly restrictive because it implies that the symmetric rational function $`R(x^2,y^2)\equiv A/(A^2+B_\mu B_\mu )`$ is analytic on the domain $`[-1,1]\times [-1,1]`$, while the polynomial $`A^2+B_\mu B_\mu `$ vanishes at the origin. This is only possible if the numerator and denominator have a common polynomial factor which can be canceled so that the denominator no longer vanishes. From the structure of $`R(x^2,y^2)`$ it follows that $`A(x^2,y^2)`$ and $`B(x^2,y^2)\equiv B_\mu B_\mu `$ must each have this polynomial factor. It turns out that apart from the necessary zero at the origin, such common factors $`F(x,y)`$ tend to possess another zero in the domain $`[-1,1]\times [-1,1]`$, which then makes the inclusion of requirement ($`\gamma `$) impossible. Consequently, it would be inherently useful to prove or disprove the following hypothesis:

* HYPOTHESIS: Let $`G`$ be a polynomial in $`x^2,y^2`$ with complex coefficients, such that $`G(0,0)=1`$.
There is no $`G`$ such that the polynomial

$$B(x^2,y^2)=4x^2(1-x^2)G^2(x^2,y^2)+4y^2(1-y^2)G^2(y^2,x^2)$$

can be factorized as $`B(x^2,y^2)=P(x,y)F(x,y)`$, where $`P(0,0)\ne 0`$, $`F(0,0)=0`$, and $`F(x,y)\ne 0`$ elsewhere on the domain $`[-1,1]\times [-1,1]`$.

The above form of $`B(x^2,y^2)`$ is dictated by hypercubic symmetries. I stress that if this hypothesis is true, then it implies “strong non-ultralocality” of GW actions. On the other hand, possible examples of ultralocal GW actions can only be built out of counterexamples to this algebraic statement.

## 4 Simple GW Actions?

Assuming that GW actions indeed cannot be simple in position space (non-ultralocality), one naturally asks what kind of other practically useful properties they can have. I propose to examine the possibility that GW actions can be simple in eigenspace. If the complete left-right eigenset $`\{|\varphi _L^i(U)>,|\varphi _R^i(U)>,\lambda _i(U)\}`$ exists for $`D(U)\in A`$, then we can represent the operator as

$$D=\sum_i |\varphi _R^i>\lambda _i<\varphi _L^i|$$

This representation is useful even in the case of Wilson and staggered fermions, since the effects of light quarks are quickly accounted for by including only the lightest eigenmodes in the sum. The underlying idea is appealing both for the generation of dynamical configurations and for propagator technology: once the approximate eigenmode representation is computed, the resulting quark propagators can be tied together in any way desired. The group at the University of Virginia is currently actively pursuing this approach (see also the talk by T. Lippert). The eigenspace representation of the operator $`D(U)`$ is “simple” if the corresponding eigenbasis can be calculated efficiently. Even if $`D(U)`$ is not ultralocal, there may still be a commuting ultralocal operator $`Q(U)`$ with the same eigenbasis. In this case the eigenspace representation of $`D`$ is as simple as the eigenspace representation of $`Q`$. Consequently, it would be very interesting to know whether there are local, doubler-free elements of $`A^{GW}`$ for which such an ultralocal operator $`Q`$ exists. Obvious candidates for GW actions of the above type would be functions of the ultralocal operator $`Q`$, i.e. $`D=F(Q)`$. It is an open question whether such GW actions exist. Another simple possibility is to consider functions $`F(Q,Q^+)`$, where $`Q`$ is an ultralocal normal operator, $`[Q,Q^+]=0`$. The task of finding such actions simplifies a lot if one only considers operators $`Q=D_0`$ representing a valid, doubler-free lattice Dirac operator with $`\gamma _5`$-hermiticity ($`D_0^+=\gamma _5D_0\gamma _5`$). This is because to such $`D_0\in A^{ND}`$ one can directly assign a doubler-free element $`D\in A^{GW}`$, in a way analogous to the Neuberger construction, i.e.

$$D=m_0\left[1+(D_0-m_0)\frac{1}{\sqrt{(D_0-m_0)^+(D_0-m_0)}}\right]$$

with an appropriate choice of $`m_0`$.
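As a concrete illustration (my sketch, not from the original text), one can verify numerically that this construction satisfies the canonical GW relation $`\{D,\gamma _5\}=D\gamma _5D/m_0`$ in the simplest setting: free field, two dimensions, $`D_0`$ the Wilson-Dirac operator, so everything is a $`2\times 2`$ matrix at each momentum:

```python
import numpy as np

# 2D Euclidean Dirac algebra: gamma_1, gamma_2 = sigma_1, sigma_2; gamma_5 = sigma_3
g = [np.array([[0, 1], [1, 0]], complex), np.array([[0, -1j], [1j, 0]], complex)]
g5 = np.array([[1, 0], [0, -1]], complex)
I = np.eye(2, dtype=complex)
m0 = 1.0  # a choice in (0, 2)

def wilson(p):
    """Free Wilson-Dirac operator D_W(p) in Fourier space."""
    A = np.sum(1.0 - np.cos(p))
    return A * I + 1j * sum(np.sin(p[mu]) * g[mu] for mu in range(2))

def overlap(p):
    """D = m0 [ 1 + X (X^+ X)^{-1/2} ] with X = D_W - m0."""
    X = wilson(p) - m0 * I
    w, V = np.linalg.eigh(X.conj().T @ X)        # X^+ X is hermitian positive
    return m0 * (I + X @ (V @ np.diag(w ** -0.5) @ V.conj().T))

p = np.random.uniform(-np.pi, np.pi, 2)
D = overlap(p)
violation = D @ g5 + g5 @ D - D @ g5 @ D / m0    # {D, g5} - D g5 D / m0
print(np.max(np.abs(violation)))                 # ~1e-15: GW relation holds
```

The eigenvalues of $`D`$ built this way lie on the circle of radius $`m_0`$ centred at $`m_0`$, as expected for such operators.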
One is thus led to consider the following:

* PROBLEM: Are there any ultralocal elements $`D_0\in A^{ND}`$ with $`\gamma _5`$-hermiticity that are normal?

This is a beautiful problem with trivial solutions at the free level (e.g. the Wilson-Dirac operator), but none are known in arbitrary gauge background. To get a flavour of what is involved here, it is useful to unmask the spinorial structure of the problem. For example, in two dimensions any operator $`D\in A`$ has the form $`D=A𝕀+iB_\mu \gamma _\mu +C\gamma _5`$, where $`A,B_\mu ,C`$ are gauge invariant matrices with position and gauge indices only. $`\gamma _5`$-hermiticity implies that $`A,B_\mu ,C`$ are hermitian, and in this case normality demands

$$\{B_\mu ,C\}=iϵ_{\mu \nu }[B_\nu ,A]$$ (1)

The challenge is to find out whether, having only a finite number of gauge paths at our disposal, we can arrange for the above identities to hold. This is quite nontrivial, and the definite properties of $`A,B_\mu ,C`$ under hypercubic transformations represent an important constraint here. I would also like to point out that in the above language it is easy to understand how $`\gamma _5`$-hermiticity combined with normality simplifies the algebraic structure imposed by GW symmetry. Indeed, if one identifies $`J_1\equiv A-1,J_2\equiv B_1,J_3\equiv B_2`$, and $`C^2=1-J_\mu J_\mu `$, then the canonical GW relation $`\{D,\gamma _5\}=D\gamma _5D`$ in two dimensions translates into

$$\{J_\mu ,C\}=iϵ_{\mu \nu \rho }[J_\nu ,J_\rho ]$$ (2)

Relations (1) form a subset of the above identities that are automatically satisfied if $`\gamma _5`$-hermiticity and normality are demanded. The algebraic structure (2) implied by the GW relation is perhaps interesting by itself and deserves further study.

Finally, I would like to introduce a lattice Dirac operator which might be of practical relevance in the context of using eigenspace techniques in lattice QCD. Let $`\{|\varphi _L^i(U)>,|\varphi _R^i(U)>,\lambda _i(U)\}`$ be the eigenset of the Wilson-Dirac operator $`D_W(U)`$, and let $`m_0\in (0,2)`$. Consider the operator

$$D=\sum_i |\varphi _R^i>m_0\left[1+\frac{\lambda _i-m_0}{\sqrt{(\lambda _i-m_0)^{*}(\lambda _i-m_0)}}\right]<\varphi _L^i|$$ (3)

This is a well defined operator for arbitrary gauge background in which the left-right eigenbasis of $`D_W`$ exists. In trivial gauge background ($`U=1`$) it coincides with the Neuberger operator, and the spectrum always lies on a circle with radius $`m_0`$. While the locality properties of $`D`$ are questionable, the fact that it is perfectly local in the free limit suggests that non-local parts, which might be present, will be arbitrarily small on sufficiently smooth backgrounds. Even though this can certainly cause practical concerns at intermediate couplings, it would seem unlikely that there is a problem of principle as the continuum limit is approached. Operator (3) should have improved chiral properties relative to the Wilson-Dirac operator, while its computational demands in the eigenspace approach are approximately the same. The degree to which the chirally non-symmetric part of the propagator is local in nontrivial backgrounds (it is proportional to a delta function in the free limit) is an open question. At the same time, however, the fact that the spectrum is forced onto a circle suggests that the additive mass renormalization will be small (if any). These issues are currently under investigation.

## Acknowledgment

I thank Hank Thacker and Ziad Maassarani for many pleasant discussions on the topics presented here. I would also like to express my gratitude to all the organizers of this Workshop for excellent hospitality.
no-problem/9912/hep-ex9912030.html
## 1 Introduction

### 1.1 The Structure of a Fundamental Gauge Boson?

The photon is of course the gauge boson of QED, and is as far as we know elementary. When we measure photon “structure”, what we are in fact doing is probing the quantum fluctuations of the field theory. The photon couples - via a splitting into virtual, charged fermion-antifermion pairs - to the electroweak and strong interactions. Such behaviour is an important aspect of quantum field theories, and similar phenomena arise in many different situations. For example, the gluon splitting to quarks drives the scaling violations in hadronic structure. Studies of photon structure test our understanding of this behaviour. In this talk I give an overview of the current data and phenomenology, and outline some opportunities available in the medium term future.

### 1.2 How photon structure is measured

The quick answer to the question “How do you measure the structure of the photon?” is unfortunately “With great difficulty”. Experiments measuring photon structure in general use the almost on-shell photons accompanying $`e^+`$ or $`e^-`$ beams. These photons are typically probed by some short distance process. This may be deep inelastic scattering, high transverse energy ($`E_\mathrm{t}`$) jets or particles, or heavy quark production. These are in general rather challenging measurements. One key problem is the fact that the target photons have a spectrum of energies. If this is integrated over, sensitivity to the photon structure is lost. Thus, some way must be found to measure the photon energy on an event-by-event basis. Another area of difficulty is that although the presence of high $`E_\mathrm{t}`$ particles does imply the presence of a short distance scale, the exact relationship between the distance scale and $`E_\mathrm{t}`$ is not clear. The leading order processes for jet, particle and heavy quark production are shown in figure 1. In each case the virtual parton propagator probes the photon at a scale related to $`E_\mathrm{t}^2`$. The diagrams in which the photon enters into the hard process directly (“direct processes”) and those in which a partonic photon structure is resolved (“resolved processes”) have comparable cross sections. At higher orders, and in real data, the separation between resolved and direct is at some level arbitrary, since if the photon splits to a $`q\overline{q}`$ pair of virtuality $`\mu `$, the process could be categorized as either resolved or NLO direct, depending upon whether the factorization scale is chosen to be greater or less than $`\mu `$. Despite this, it is still possible and useful to select events in which a greater or lesser fraction of the photon’s momentum enters into the hard process, where the hard process is defined in terms of observables such as jets. Such a selection is often made on the basis of the variable

$$x_\gamma ^{\mathrm{OBS}}=\frac{\sum_{\mathrm{jets}}E_T^{\mathrm{Jet}}e^{-\eta ^{\mathrm{jet}}}}{2yE_e}=\frac{\sum_{\mathrm{jets}}(E-p_z)}{(E-p_z)_\gamma }$$ (1)

which is the fraction of the photon momentum entering into the jets. Direct processes have high $`x_\gamma ^{\mathrm{OBS}}`$ and resolved processes have low $`x_\gamma ^{\mathrm{OBS}}`$. Since $`x_\gamma ^{\mathrm{OBS}}`$ is a kinematic variable defined in terms of jets, it is calculable to any order in QCD and in any model desired.
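As a small illustration of how this variable is assembled from dijet observables (a sketch of mine; the jet kinematics below are made up):

```python
import math

def x_gamma_obs(jets, y, E_e):
    """x_gamma^OBS = sum_jets E_T exp(-eta) / (2 y E_e), Eq. (1).

    jets: list of (E_T, eta) pairs; y*E_e is the target photon energy.
    For a massless jet, E_T * exp(-eta) equals E - p_z, which gives the
    second form of Eq. (1)."""
    return sum(et * math.exp(-eta) for et, eta in jets) / (2.0 * y * E_e)

# Hypothetical event: two jets, photon carrying a fraction y of the beam
jets = [(6.0, 0.3), (5.5, -0.1)]     # (E_T in GeV, pseudorapidity)
print(x_gamma_obs(jets, y=0.25, E_e=45.6))
# Values near 1 tag direct-enriched events; low values tag resolved events.
```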
The other major process by which photon structure is measured is deep inelastic $`e\gamma `$ scattering (also shown in figure 1). In this case an inclusive measurement of the cross section is made as a function of the four-momentum transfer at the lepton vertex ($`Q^2`$) and/or the Bjorken scaling variables $`x`$ and $`y`$. This allows the extraction of structure functions, defined in exactly the same way as for a nucleon. Neglecting weak interactions (since for current experiments $`\mathrm{Q}^2\ll M_W^2`$), this gives:

$$\frac{d^2\sigma _{e\gamma \to eX}}{dxd\mathrm{Q}^2}=\frac{2\pi \alpha ^2}{xQ^4}\left[(1+(1-y)^2)F_2^\gamma (x,\mathrm{Q}^2,\mathrm{P}^2)-y^2F_L^\gamma (x,\mathrm{Q}^2,\mathrm{P}^2)\right]$$ (2)

where if $`X=\mu ^+\mu ^-`$ the QED structure is being probed (i.e. the number of muons ‘in’ the photon) and if $`X=`$ hadrons the QCD structure is being probed (i.e. the number of quarks ‘in’ the photon). For DIS, $`\mathrm{Q}^2\gg \mathrm{P}^2`$, that is, a “highly virtual” photon probes a less virtual photon at scale $`\mathrm{Q}^2`$.

## 2 QED Structure Function

The QED structure of the photon is exactly calculable, since no strong interaction is involved. The OPAL measurement is shown in figure 2. The data are in good agreement with the fundamental prediction of QED (solid line). Measurements have also been made by the other LEP experiments. Noteworthy features include the peak at high $`x`$ values, and the fact that the structure function increases with $`\mathrm{Q}^2`$. Both effects are due to the first splitting of the photon into $`\mu ^+\mu ^-`$, at least one of which carries a large fraction of the photon energy. As $`\mathrm{Q}^2`$ increases, smaller and smaller splittings of the photon can be resolved and thus the structure function increases, even at high $`x`$. Before moving on to compare with the QCD structure, two other points should be made here. Firstly, the $`x`$ resolution is fairly good (around 0.03). This is because $`W`$, the $`\mu ^+\mu ^-`$ mass, is well measured, and hence the target photon energy is well known. A second point is that the virtuality of the target photon has a significant effect for all $`\mathrm{Q}^2`$, even though $`\mathrm{P}^2=0.05\,\mathrm{GeV}^2`$. This can be seen by comparing the solid line, where the virtuality has been taken into account, with the dashed line, where the target photon has been assumed to be real.

## 3 QCD and the ‘Real’ Photon

The case in which the photon splits into a $`q\overline{q}`$ pair is theoretically more complex. If the quarks have a low transverse momentum ($`k_T`$) with respect to each other, they can exist for times which are long on the time-scale of the strong interaction. Thus, a complex partonic system can evolve, consisting of a mixture of perturbative and non-perturbative physics. In addition to the presence of non-perturbative physics in the initial state, the final state also involves long-distance hadronization processes. The QCD structure of the photon is also experimentally more problematic. Jets of hadrons are in general harder to measure than muons. This does not just affect measurements of jet or particle production - where issues such as fragmentation and ‘underlying events’ become important. It is also an issue in DIS, since the photon energy must be measured from the final state. Thus it becomes critical to contain as much of the hadronic system as possible in the detector, and vital to have the best possible estimates of the structure of this final state to allow estimates of the resolution and acceptance to be reliably made.
These effects mean that there is potentially large model dependence in any extraction of photon structure. Dealing with this model dependence is a major issue for the experiments. Best practice is to separate the measurement as much as possible from interpretation. Thus measurements of cross sections are made with minimal model dependence. These cross sections are defined in terms of the real final state (hadrons rather than partons!) and the kinematic regions in which they are measured are dictated by detector acceptance. Because of this, they are unfortunately often hard to interpret and compare with each other or with fundamental QCD. These cross sections are sensitive to many important effects, not only the photon structure, but also that of the proton (in particular the gluons at $`x\sim 10^{-2}`$ at HERA), as well as $`\alpha _s`$, low-$`x`$ QCD, hadronization and “underlying” events. There is not much benefit in being sensitive to all of these at the same time, and thus the role of phenomenology is critical to isolate the different effects as far as possible. Several programs allowing flexible calculation of the processes in QCD at NLO are available and are crucial in the ongoing effort to extract fundamental physics from a wide range of data. Equally important are general purpose Monte Carlo simulations which include models of hadronization and underlying events as well as (typically) LO QCD and parton showers. These allow us to build a consistent picture of these processes over a large data set from several experiments, and also allow improvements in our understanding to be fed back into the modelling of detector acceptance, leading to improved measurements. The potential rewards are very high, and these are exciting times for those involved. However, it is very much “work in progress”, and the following represents a snapshot of an evolving field. See these references for more details.

### 3.1 QCD Structure Function

The QCD structure of the photon has been measured for samples with $`0.24\,\mathrm{GeV}^2<\mathrm{Q}^2<400\,\mathrm{GeV}^2`$. An example of the more recent data at low $`x`$ and intermediate $`\mathrm{Q}^2`$ is shown in figure 3a, from OPAL and L3. Earlier PLUTO data are also shown. The data are compared to curves from the GRV group. In the GRV model of the proton, the rise in $`F_2^{\mathrm{proton}}`$ is generated by the DGLAP evolution. The same behaviour is expected in the photon parton distributions at similar $`x`$, and can be seen in the curves. Unfortunately the data do not yet have the reach or accuracy to determine whether or not such a rise occurs. However, improved statistics and smaller systematic uncertainties are expected before the end of LEP running. It should be noted that L3 sometimes display their data without showing an estimate of the model dependence, preferring instead to show a series of data sets each corrected according to a different model. Here, these different data sets have been used to estimate the systematic uncertainty. Figure 3b shows preliminary measurements at a higher $`\mathrm{Q}^2`$ from ALEPH. Since these are plotted on a linear scale in $`x`$, they can easily be compared to the QED structure functions of figure 2. Whilst at low $`x`$ the gluon splitting means that the structure function is expected to rise like that of the proton, at higher $`x`$ the photon-to-two-fermion splitting dominates and the behaviour is similar for the QED and QCD structure functions.
Also clearly seen in the data is the expected positive scaling violation at all $`x`$, driven by photon splitting at high $`x`$ and (like the proton) by the gluons at low $`x`$. The data are summarized in figure 4.

### 3.2 QCD and the Real Photon at HERA

Since at HERA the photon is probed by a parton from the proton, HERA does not measure $`F_2^\gamma `$. The HERA equivalent of $`F_2^\gamma `$ is a jet cross section. This has the major disadvantage that hadronization, as well as the choice of jet definition, plays a role. An advantage, however, is that the gluon distribution in the photon enters directly in the cross section at leading order. A further advantage is that, due to the fact that the CM frame is boosted strongly in the proton direction, the photon remnant tends to open out and be relatively well measured in the detector. In addition, both ZEUS and H1 have small angle taggers, which allow the photon energy to be inferred from the electron energy. These effects mean that the target photon energy is better measured than at LEP. The measured cross sections may be compared to NLO pQCD calculations, which take a photon parton distribution function (PDF) as input. If the jets have high enough transverse energy ($`E_T^{\mathrm{Jet}}`$), the hadronization corrections are expected to be at the level of a few percent. The probing scale is something of the order of $`E_T^{\mathrm{Jet}}`$. The latest ZEUS preliminary data are shown in figure 5 for differential cross sections defined as in , but now measured above a variety of $`E_T^{\mathrm{Jet}}`$ thresholds, increasing the hard scale. The data are compared to a calculation using the AFG-HO photon PDF. The high $`x_\gamma ^{\mathrm{OBS}}`$ data are in excellent agreement with the theory. However, in the region including both high and low $`x_\gamma ^{\mathrm{OBS}}`$ data, there is a discrepancy, particularly in the forward region, where the lowest values of $`x_\gamma ^{\mathrm{OBS}}`$ are probed. This discrepancy shows no sign of dying away with increasing $`E_T^{\mathrm{Jet}}`$, even though hadronization effects are estimated to be small at these values. The potential of such data may be illustrated by assuming LO QCD & MC models, and estimating a “parton level cross section”, from which an effective parton density can be extracted. This exercise has been performed by H1, both for jets and for charged particles, and the result is shown in figure 6a. The rise in the gluon distribution, which will drive the rise in $`F_2^\gamma `$ at lower $`x`$, is clearly in the region of HERA sensitivity, although the model dependence in the data is now large. The physics of hard scattering in photon-proton collisions is far from trivial. The photon remnant is an interesting feature of the final state. Some of its properties have been measured by ZEUS. In particular it was measured to have an average transverse momentum $`p_T=2.1\pm 0.2`$ GeV w.r.t. the photon direction. These measurements have now been extended by H1, measuring the remnant as a function of $`E_T^{\mathrm{Jet}}`$ for photoproduction and for virtual photons ($`1.4<\mathrm{P}^2<25\,\mathrm{GeV}^2`$). These results, shown in figure 6b, are consistent with the ZEUS result. Importantly, the behaviour of the photon remnant is critical for the $`x`$ resolution at LEP, since it determines how much hadronic energy escapes down the beam-pipe. HERWIG (shown in the figure) does a reasonable job, and such distributions are used to constrain the models employed at LEP, thus reducing the systematic errors.
The fact that the photon has a dual nature - behaving either as a hadron or as a point-like particle - allows several interesting QCD studies to be made. A recent measurement is that of the three-jet distributions. The QCD dynamics of the three-jet system is sensitive to the colour of the incoming partons. In figure 7, $`\theta _3`$, the angle between the highest energy jet and the proton beam direction (defined as in ), is shown, and compared to $`𝒪(\alpha \alpha _s^2)`$ QCD and to LO MC simulation, in which the third jet comes from the parton shower. There is a change in shape of the distribution as $`x_\gamma ^{\mathrm{OBS}}`$ increases and the mix of incoming resolved and direct photons changes. Charged particle distributions are also sensitive to photon PDFs. They also require non-perturbative input in the form of a fragmentation function, but once this is taken into account, there are no hadronization uncertainties as such. However, there is still sensitivity to the modelling of the underlying event and the choice of hard scale (see figure 8). In addition to these processes there are prompt photon data, as well as measurements of jet shapes and sub-jets at HERA and LEP, all of which have the power to reduce the uncertainties in the final state and in the theory, if taken together. Given this enormous data set, of increasing accuracy and scope, the time is right to do a serious QCD fit to the HERA and LEP data! In the final sections of this presentation I describe briefly two areas of photon structure studies. Both are relatively new, and both offer new and possibly simpler ways to investigate the underlying physics.

## 4 Charm and the Real Photon

Charm photoproduction has been measured at both HERA and LEP. If it is assumed that the charm mass is sufficiently high that perturbative QCD is applicable, then the ‘charm content’ of the photon is expected to be a totally perturbatively calculable parton distribution. In fact, if the factorization scale is taken lower than the charm mass, charm production takes place entirely within the hard process and there is no charm content to the photon. The data are beginning to address whether this picture is sufficient. Charm is typically tagged in the $`D^*`$ channel and recent measurements of $`D^*`$ cross sections are shown in figure 9. The agreement with theory is reasonable. However, the theory lies somewhat below the data in the forward (proton) direction in the HERA data. If jets are measured, $`x_\gamma ^{\mathrm{OBS}}`$ can be calculated and we can begin to examine the production mechanism in more detail. The $`x_\gamma ^{\mathrm{OBS}}`$ distributions from ZEUS and OPAL are shown in figure 10, where at least one jet contains a $`D^*`$. Comparison to the LO Monte Carlo shows that direct and resolved processes are both needed. This is still true even when the data are compared to NLO QCD. The photon PDF used in the calculation contains no charm, but events can be generated at low $`x_\gamma ^{\mathrm{OBS}}`$ where a third jet plays the role of the photon remnant. The resolved processes are suppressed relative to direct processes in comparison to the non-charm-tagged case, but the cross section is still significant at low $`x_\gamma ^{\mathrm{OBS}}`$. Beauty in photoproduction has also now been seen at both HERA and LEP.

## 5 Virtual Photon Structure?

All current studies of the structure of the photon in fact study photons with a finite (if small) virtuality.
Nevertheless, there has been a marked discontinuity in the terminology and methodology used to describe the photon as a propagator (in electron-proton DIS, for example) and the photon as a target. This is despite the fact that the $`ep`$ “DIS” experiments extend well down below $`1\,\mathrm{GeV}^2`$ in $`\mathrm{Q}^2`$, where the term “deep” inelastic scattering is arguably inappropriate, and also despite the fact that, as seen in the QED structure results (figure 2), the effect of target photon virtualities is significant even in so-called “photoproduction” experiments. This situation is changing, and a significant amount of attention is now being paid both by theory and experiment to the fact that there must be a continuum between $`\mathrm{P}^2=0`$ and $`\mathrm{P}^2\sim \mathrm{Q}^2`$. There exist some expectations as to how the transition between the two might take place. With respect to direct photon processes, the expectation is that the perturbative part of the photon structure will fall like $`\mathrm{ln}(\mathrm{Q}^2/\mathrm{P}^2)`$, whilst the non-perturbative (“Vector Meson”) part should fall something like $`m_v^2/(m_v^2+\mathrm{P}^2)`$, where $`m_v`$ is the mass of the vector meson state into which the photon may fluctuate. When the photon virtuality gets large, but remains much less than the probing scale (which may be set by high $`E_T^{\mathrm{Jet}}`$ jets, for example), the non-perturbative part vanishes and once more we obtain a perturbatively calculable parton distribution, which suggests possibilities for the measurement of $`\alpha _s`$ and an improved understanding of QCD radiation and hadronic structure.
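A toy numerical sketch of these two expected suppression patterns (my illustration; the probing scale and the use of the rho mass for $`m_v`$ are arbitrary choices):

```python
import math

M_V2 = 0.59   # m_v^2 in GeV^2, using the rho meson mass (~0.77 GeV) for m_v

def vmd_part(P2):
    """Non-perturbative ('vector meson') component: m_v^2 / (m_v^2 + P^2)."""
    return M_V2 / (M_V2 + P2)

def pointlike_part(P2, Q2, P2_ref=0.05):
    """Perturbative component ~ ln(Q^2/P^2), relative to a quasi-real photon."""
    return math.log(Q2 / P2) / math.log(Q2 / P2_ref)

Q2 = 40.0     # hypothetical probing scale in GeV^2, e.g. set by (E_T^Jet)^2
for P2 in (0.05, 1.0, 4.0, 10.0):
    print(f"P2 = {P2:5.2f} GeV^2: VMD {vmd_part(P2):.2f}, point-like {pointlike_part(P2, Q2):.2f}")
# The hadronic (VMD) component dies off quickly with P^2, while the
# point-like component falls only logarithmically.
```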
Look first at the HERA data, some of which is shown in figure 11. In figure 11a, the H1 dijet data $`d\sigma /dx_\gamma ^{\mathrm{OBS}}`$ are shown in a grid in which the probing scale ($`(E_T^{\mathrm{Jet}})^2`$) increases from left to right, whilst the target scale (the photon virtuality $`\mathrm{P}^2`$, here labelled $`\mathrm{Q}^2`$ according to the convention at HERA) changes from row to row. Concentrate for instance on the second row. The target photon virtuality is in the range $`3.5\,\mathrm{GeV}^2<\mathrm{P}^2<8.0\,\mathrm{GeV}^2`$, certainly far from zero, and yet the population of events at low $`x_\gamma ^{\mathrm{OBS}}`$ is significant. The LO QCD plus parton showers simulation can only successfully model the distribution by appealing to a virtual photon structure ansatz in which the expectations above are implemented. A similar effect is observed in the ZEUS measurement (figure 11b), where the ratio of the high to low $`x_\gamma ^{\mathrm{OBS}}`$ cross sections is plotted, now extending all the way down to the “almost real” photons previously studied. The ratio falls rapidly. However, at the lowest target virtualities it is higher than the expectations shown, and even by $`4\,\mathrm{GeV}^2`$ it remains higher than the expectation of a DIS Monte Carlo which contains no photon structure. The blue line is a “straw man” model in which the GRV real photon PDF has been used for virtual photons without modification. It is not expected to be valid here, but the fact that it is completely flat demonstrates graphically that the observed fall in the data is genuinely due to suppression of the photon structure, and not to any subtle phase space effect. The Schuler and Sjöstrand parton distribution function (red curve) contains a model for the virtual photon structure which includes a suppression with increasing virtuality. It is interesting to note that whilst both curves lie below the data at the lowest virtuality, the SaS prediction falls more slowly than the data and thus there is agreement at the higher virtualities. The discrepancy at low virtuality (and at jet $`E_T^{\mathrm{Jet}}`$ of around 6 GeV, where these data lie) has been observed before and attributed to the effect of a so-called “underlying event”, possibly generated by multiparton interactions. Such effects are not included in the curves shown here. Since models of underlying events rely upon the hadronic nature of the photon, it is natural that any discrepancy due to them should fall as the hadronic component is suppressed. Measurements of virtual photon structure have also been made in $`e^+e^-`$ experiments. Both the early PLUTO data and the more recent LEP data are consistent with being flat in $`\mathrm{P}^2`$, but are also consistent with the expected fall. There is also a relation between virtual photon structure and low-$`x`$ physics: two virtual photons in collision is as near as we are likely to get to a “golden” process in which the total cross section is expected to be governed by a pomeron (multi-gluon colour-singlet exchange) calculable in perturbative QCD according to the BFKL resummation. Such processes have been measured at LEP. Both leptons are tagged and so there is a good measurement of both photon virtualities. In contrast to the previous situations, these virtualities are now selected to be of comparable size. The idea is that there should be a large evolution in $`x`$ (actually in rapidity), and that the high virtualities mean that non-perturbative effects should be small. These are the conditions in which the BFKL resummation of $`\mathrm{ln}(1/x)`$ terms should be applicable. Although the measurements so far are above the naive two-gluon exchange calculation, they are also consistent with the model encoded in PHOJET. There is a large uncertainty in the actual prediction of BFKL, and a conclusive test has yet to be made. This field is developing rapidly in both theory and experiment.

## 6 Summary

This is a field in which lots of new data have appeared over the past two years, and more are expected soon. The new results from LEP and HERA demonstrate the improvements being made in the understanding of hadronic initial and final states, using the photon as a flexible test case. This has been made possible by the emergence of several new theoretical tools, including better general purpose simulations, implementations of virtual photon PDFs, and NLO QCD calculations which allow realistic kinematic cuts to be applied. Such efforts are proving critical in extracting fundamental physics from the data. The final word from LEP, LEP2 and pre-upgrade HERA will be a series of measurements with much reduced systematic uncertainties over a very wide kinematic range. The data and theoretical tools are now in place for a comprehensive analysis of photon structure along the lines of those carried out for the proton. To challenge our ideas about QCD structure in a second hadron-like object, with the expected differences and similarities described in this review, is a great opportunity and promises to set the essential technology of reliable QCD calculations on a significantly firmer footing. In the slightly longer term future, charm and beauty photoproduction will be a boom area at HERA after the upgrade, as both the luminosity and the ability of the detectors to tag heavy flavours should increase markedly.
I believe that the curious nature of the photon, in which by making judicious selections we can turn its “hadronic structure” on or off, is an enormously valuable tool for understanding hadronic initial and final states in general, a topic of increasing importance across the breadth of particle physics. I am very glad to acknowledge all the hard work involved on the LEP and HERA experiments, as well as the clearly written papers for EPS, and particularly all the lively discussions with many people at Photon99 - I’m looking forward to Photon2000. Extra thanks are due to Richard Nisius for several of the summary plots.
no-problem/9912/astro-ph9912039.html
# Surprises of Phase Transition Astrophysics

## Abstract

It is a half-century-long story about first-order phase transition effects on the equilibrium, stability and pulsations of planets and stars. The topics more or less considered in the author’s papers are mainly touched upon, and no attempt was made to cover all the literature on the subject.

It was Ramsey who in 1950 first showed in an MNRAS paper that $`if`$ in the center of a planet, with central pressure $`P_c`$ just reaching the critical pressure $`P_0`$, a first-order phase transition takes place with a density jump from $`\rho _1`$ to $`\rho _2=q\rho _1`$, and $`if`$ $`q>1.5`$, then the planet loses its stability at that same moment. This remarkably amazing and quite unexpected result has since been rediscovered by many authors repeatedly - last time in nucl-th/9902033.

”…- D’you think all these (cloths) will be the wear?
\- I think all these should be tailored.”
Yuri Levitanski, Soviet Jewish poet

The story began for me some 35 years ago when, trying to understand the paper , I found that something was wrong with the Mass-Central Density curves for white dwarf stars near the maximum - there should be a $`sharp`$, not a $`smooth`$, maximum of mass, as reverse $`\beta `$-decay reactions lead to a discontinuity of the density distribution inside the star. To simplify the problem, I used models allowing $`analytical`$ investigation - polytropes with indices $`n`$ equal to $`0`$ and $`1`$ in envelope and core in various combinations. In all cases considered, it happened that $`if`$ $`q>1.5`$, the instability occurs at the moment the phase transition starts in the center of the star. My supervisor acad. Ya.B. Zeldovich for a long time did not believe in such a “strange” figure as $`3/2`$, so I kept analyzing various models. After a while Ya.B. became convinced himself, and within a brief period he developed an amazingly elegant method of proving this constant 3/2, and a joint paper was submitted to Astronom. Zhurnal. Needless to say, I was happy that I was right and that I had a joint paper with such an outstanding scientist. Catastrophe came shortly: Ya.B. sent me a brief letter noting that ”the result 3/2 is known in the literature!”. No comments… I never managed after that to write another joint paper with Ya.B….

Why $`3/2`$? I do not know, but it seems that the $`3/2`$ number is directly related to the $`r^{-1}`$ law of the potential $`U(r)`$ in the Newtonian Theory of Gravitation (NTG) in $`3D`$ space. This is not the last surprise of Phase-Transition Astrophysics (PTA)! In the classical theory of equilibrium and stability of stars comprised of matter with a $`smooth`$ Equation of State (EoS), there is a common understanding that rotation leads to an increase of the stability reserve while General Relativity (GR) leads to a decrease of stability. And you may guess that exactly the $`opposite`$ situation holds in PTA. In GR, the critical value of the $`energy`$ density jump $`q=\epsilon _2/\epsilon _1`$ is $`3/2(1+P_0/\epsilon _1)`$, that is, $`larger`$ than in NTG. Why $`larger\mathrm{?}`$ \- I do not know… As to rotation, it was found that for steady-state rotation with $`small`$ angular velocity $`\mathrm{\Omega }`$, $`q_{crit}=3/2-\mathrm{\Omega }^2/4\pi G\rho _1`$; that is, of course, rotation $`reduces`$ the stability of the star $`against`$ the phase-transition-induced instability.
Why $`reduces\mathrm{?}`$ \- I do not know… The abovementioned three surprises of PTA, combined in the formula

$$q_{crit}=\frac{3}{2}\left(1+\frac{P_0}{\rho _1c^2}\right)-\frac{\mathrm{\Omega }^2}{4\pi G\rho _1},$$

are valid for $`any`$ EoS's of the old and new phases. A number of amazing results were found by analysing various particular models. First to be mentioned is the two-constant-density-phase model with a First-Order Phase Transition (PT1) at the boundary between core and envelope. This model was extensively used for different problems - the general dependence of Mass-Radius etc. curves, effects of rotation and GR, a neutral core (when PT1 begins at some distance from the center of a star), etc. In the last case, it was found that $`q_{crit}`$ is an increasing function of the size of the neutral core: $`q_{crit}=1+k/[3-4x-(k-1)x^4]`$, where $`x`$ is the $`relative`$ radius of the neutral core, and $`k`$ is the ratio of the density in the neutral core to the density in the envelope; and $`q_{crit}\to \mathrm{\infty }`$ as $`x\to x_{cr}(k)`$, where for example at $`k=1`$, $`x_{cr}=3/4`$ \- another amazing value. At $`larger`$ neutral cores, PT1 with arbitrarily large $`q`$ can not force a star to lose its dignity and stability! For a model with polytropic indices $`n=1`$ both in envelope and neutral core, $`x_{cr}=0.6824`$ (the corresponding relative mass of the core is 0.6375). Returning to $`n=0`$: for $`q>3/2`$ there is another critical point in the Mass-$`P_c`$ curves, where at the minimum of mass the $`recovery`$ of stability takes place, and for larger $`P_c`$ there is a branch of stable equilibrium states. At $`the\ minimum\ of\ Mass`$:

$$f(q,x)=(q-1)^2x^4+4(q-1)x+(3-2q)=0.$$ (1)

It was found that GR effects, in $`the\ first\ Post\text{-}Newtonian\ approximation`$, lead to the PN-correction to $`x`$ in Eq. (1):

$$\mathrm{\Delta }_{PN}(q,x)=\frac{9-7q+27(q-1)x+(q-1)(4q-27)x^2+(q-1)(9-4q)x^3}{2(q-1)[1+(q-1)x^3]^3}\frac{P_0}{\rho _1c^2}.$$ (2)

The surprises have not finished - this correction is $`negative`$ at small values of $`q`$ and $`positive`$ at $`q>1.89`$. That is, for larger $`q`$, the GR effect is of the $`correct`$ sign and $`reduces`$ the region of stability in the $`(q,x)`$ plane. For the same $`n=0`$ model, an analytical formula for the frequency, $`\omega `$, of small adiabatic radial pulsations of the lowest mode can be found:

$$\omega _0^2=\frac{4\pi G\rho _1f(q,x)}{3(q-1)(1-x)},$$ (3)

and in the case of $`slow`$ rotation with angular velocity $`\mathrm{\Omega }`$:

$$\omega _\mathrm{\Omega }^2=\omega _0^2+\mathrm{\Delta }_\mathrm{\Omega }(q,x),\text{with}$$

$$\mathrm{\Delta }_\mathrm{\Omega }(q,x)=\frac{2}{3}\mathrm{\Omega }^2\left[\frac{5x(1-x)(1+x)^2}{1+(q-1)x^5}-\frac{1+(q-1)x}{(q-1)(1-x)}\right].$$ (4)

The rotational correction to the frequency squared is $`negative`$ at smaller values of $`x`$, that is, rotation reduces the stability reserve of a star with PT1; then, for larger values of $`x`$, the rotational effect is of the “correct” sign.
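These sign changes are easy to explore numerically. A small sketch (mine, not from the original paper) locates the critical radius from Eq. (1) and evaluates the sign of the rotational correction, Eq. (4), along the critical curve:

```python
import numpy as np
from scipy.optimize import brentq

def f(q, x):
    """Critical function of Eq. (1); f = 0 marks the stability boundary."""
    return (q - 1)**2 * x**4 + 4 * (q - 1) * x + (3 - 2 * q)

def x_crit(q):
    """Relative radius of the phase boundary at the minimum of mass (q > 3/2)."""
    return brentq(lambda x: f(q, x), 0.0, 1.0)

def delta_omega(q, x):
    """Rotational correction of Eq. (4), in units of (2/3) Omega^2."""
    return (5 * x * (1 - x) * (1 + x)**2 / (1 + (q - 1) * x**5)
            - (1 + (q - 1) * x) / ((q - 1) * (1 - x)))

for q in (1.6, 2.0, 3.0):
    x = x_crit(q)
    print(f"q = {q}: x_crit = {x:.4f}, sign of Delta_Omega = {np.sign(delta_omega(q, x)):+.0f}")
# Along the critical curve the rotational correction is negative at small x
# (rotation destabilizes) and turns positive at larger x.
```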
In fact, the dependence of both corrections, due to GR and rotation, on $`x`$ and $`q`$ is rather complicated, and Fig. 1 presents only a part of the $`(q,x)`$ plane with the lines on which $`\mathrm{\Delta }_\mathrm{\Omega }(q,x)=0`$ (broken line labeled ”Rot”) and $`\mathrm{\Delta }_{PN}(q,x)=0`$ (dash-dot line labeled ”GR”). Also shown is the curve $`f(q,x)=0`$ (solid line labeled ”Crit”) from Eq. (1), which marks the boundary between $`stable`$ equilibrium states (right-hand) and $`unstable`$ ones (left-hand). Remarkably, this curve crosses both curves of $`zero\ correction`$. Due to GR, the curve $`f(q,x)=0`$ is forced to rotate clockwise around the point of intersection of the lines ”GR” and ”Crit”, while rotation makes the critical curve rotate counterclockwise around the point of intersection of the lines ”Rot” and ”Crit”. Why do rotation and GR exert such strange effects on the stability of a star with PT1? \- I do not know…

A lot of interesting and unsolved problems - pulsations (eigenvalues and eigenfunctions, damping), strong rotation with regard to the deviation of equilibrium figures from spherical symmetry, considering more EoS's, etc. - have been left aside, but the 1500-word mark is close and I pass to the epilogue… Most of this story happened to me many years ago, in the Soviet Union, the Power so unexpectedly crashed in the recent past, as if some first-order phase transition had acted in the center of It… Now I’m in Israel, all my papers being left over there and these lines being written by heart, by memory, with no paper at hand… And last sentences (hopefully still within the 1500-word limit), returning to the epigraph:

\- Do I believe this all will be awarded?
\- I think this all should be written.

Some bibliographical remarks:

1. W.H. Ramsey, MNRAS 110 (1950) 325; 113 (1951) 427; see also M.J. Lighthill, MNRAS 110 (1950) 339 ($`n=0`$ model); W.C. De Markus, Astron. J. 59 (1954) 116 (rotation).
2. T. Hamada, E.E. Salpeter, Ap.J. 134 (1961) 669; see also E. Schatzman, Bull. Acad. Roy. Belgique 37 (1951) 599; E. Schatzman, White Dwarfs, North-Holland Publ. Co., Amsterdam, 1958.
3. Z.F. Seidov, Izv. Akad. Nauk Azerb. SSR, ser. fiz-tekh. matem. no. 5 (1968) 93 (n=0); Soobsh. Shemakha Astrophys. Observ. 5 (1970) 58 (n=1); Izv. Akad. Nauk Azerb. SSR, ser. fiz-tekh. matem. no. 1-2 (1970) 128 (n=0 with neutral core); Izv. Akad. Nauk Azerb. SSR, ser. fiz-tekh. matem. no. 6 (1969) 79 (n=1 with neutral core); Ph.D. thesis, Yerevan Univ., 1970.
4. Ya.B. Zeldovich, Z.F. Seidov (1966), unpublished; see also Z.F. Seidov, Astrofizika 3 (1967) 189.
5. Ya.B. Zeldovich, I.D. Novikov, Relativistic Astrophysics, Univ. Chicago Press, Chicago, 1971 (this is only one of many possible references to these outstanding authors).
6. Z.F. Seidov, Astron. Zh. 48 (1971) 443 (GR); see also B. Kämpfer, Phys. Lett. 101B (1981) 366.
7. Z.F. Seidov, Astrofizika 6 (1970) 521 (rotation).
8. Z.F. Seidov, Space Research Institute Preprint Pr-889 (1984) (GR).
9. M.A. Grienfeld, Doklady Akad. Nauk SSSR 262 (1982) 1342 (pulsations); see also G.S. Bisnovatyi-Kogan, Z.F. Seidov, Astrofizika 21 (1984) 570.
10. Z.F. Seidov, Doctor of Sci. thesis, Space Research Institute, Moscow, 1984.
11. Z.F. Seidov, astro-ph/9907136 (non-1/r potential and PT1).
no-problem/9912/hep-ph9912346.html
BUTP-99/30

# Report of Working Group on Electromagnetic Corrections<sup>1</sup>

<sup>1</sup>Talk given at the Eighth International Symposium on Meson-Nucleon Physics and the Structure of the Nucleon (MENU99), Zuoz, Engadine, Switzerland, 15 - 21 August 1999.

A. Rusetsky

Institute for Theoretical Physics, University of Bern, Sidlerstrasse 5, CH-3012, Bern, Switzerland; Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia; and HEPI, Tbilisi State University, 380086 Tbilisi, Georgia

## Abstract

The talks delivered by M. Knecht, H. Neufeld, V.E. Lyubovitskij, A. Rusetsky and J. Soto during the session of the working group on electromagnetic corrections to hadronic processes at the Eighth International Symposium MENU99 cover a wide range of problems. In particular, these include: the construction of the effective Lagrangians that are then used for the evaluation of electromagnetic corrections to the decays of $`K`$ mesons; the evaluation of some of the low-energy constants in these Lagrangians, using sum rules and large-$`N_c`$ arguments; complete calculations of electromagnetic corrections to the $`\pi \pi `$ scattering amplitude at $`O(e^2p^2)`$; and the general theory of electromagnetic bound states of hadrons in the Standard Model.

The problem of uniquely disentangling strong and electromagnetic interactions in hadronic transitions has been a long-standing challenge for theorists. All data obtained in high-energy physics experiments contain a highly nontrivial interplay of strong and electromagnetic effects, with a huge difference in the interaction ranges. In addition, there are the isospin-breaking effects caused by the difference in quark masses, which generally has a non-electromagnetic origin. In the analysis of the experimental data, however, one would prefer to unambiguously subtract all isospin-breaking corrections from the hadronic characteristics in order to obtain the quantities defined in ”pure QCD”, at equal quark masses – the case for which most of the theoretical predictions are made. In the context of the $`\pi N`$ scattering problem, the issue of electromagnetic corrections has been extensively studied by using dispersion relations and the potential model. The closely related problem of electromagnetic corrections to the energy-level shift and decay width of pionic hydrogen and pionic deuterium was analyzed, again by using the potential model. Both of the above approaches are based upon certain assumptions about the precise mechanism of the incorporation of electromagnetic effects in the strong sector, given that the details of the strong interactions are unknown. It remains unclear, however, how to estimate the systematic error caused by these assumptions or, in other words, how to consistently include within these approaches all the isospin-breaking effects which are present in the Standard Model. Given the fact that the above approaches are used for the partial-wave analysis of $`\pi N`$ scattering data and for the measurements of $`\pi ^-p`$, $`\pi ^-d`$ atom characteristics, which result in independent determinations of the values of the $`S`$-wave $`\pi N`$ scattering lengths, a consistent treatment of the isospin-breaking effects might be helpful for understanding the discrepancies between the results of different analyses. Further, the problem of the systematic treatment of electromagnetic corrections is crucial for the analysis of experimental data on the decays of $`K`$ mesons.
In particular, the study of $`K_{l4}`$ decays, at present performed by the E865 collaboration at BNL and by KLOE at the DA$`\mathrm{\Phi }`$NE facility (LNF-INFN) , will enable one to measure the parameters of the low-energy $`\pi \pi `$ interaction and thus provide extremely valuable information about the nature of strong interactions at low energy. Preliminary results have so far been obtained neglecting electromagnetic effects, which might significantly affect the amplitudes in the threshold region. Last but not least, a complete inclusion of electromagnetic corrections is needed in order to fully exploit the high-precision data on hadronic atoms provided by the DIRAC collaboration at CERN ($`\pi ^+\pi ^{}`$), the DEAR collaboration at DA$`\mathrm{\Phi }`$NE ($`K^{}p`$, $`K^{}d`$) , the experiments at PSI ($`\pi ^{}p`$, $`\pi ^{}d`$), KEK ($`K^{}p`$), etc. These experiments, which allow for the direct determination of hadronic scattering lengths from the measured characteristics of hadronic atoms (level energies and decay widths), contribute significantly to our knowledge of the properties of QCD in the low-energy regime. In particular, the measurement of the difference of the $`S`$-wave $`\pi \pi `$ scattering lengths $`a_0-a_2`$ by the DIRAC experiment will allow one to distinguish between the large- and small-condensate scenarios of chiral symmetry breaking in QCD: should it turn out that the measured value of $`a_0-a_2`$ differs from the prediction of standard ChPT , one would have to conclude that symmetry breaking in QCD proceeds differently from the standard picture. Further, the $`\pi N`$ scattering length $`a_{0+}^{}`$ that is measured by the experiments on pionic hydrogen and pionic deuterium can be used as an input to determine the $`\pi NN`$ coupling constant, and from precise knowledge of the $`KN`$ scattering lengths one might extract new information about, e.g., the kaon sigma term and the strangeness content of the nucleon. According to the modern point of view, the low-energy interactions of Goldstone bosons (pions, kaons, …) in QCD can be consistently described in the language of effective chiral Lagrangians. The amplitudes of processes involving these particles below $`1\mathrm{GeV}`$ can be systematically expanded in a series in the external momenta of the Goldstone particles and the light quark masses. All nonperturbative QCD dynamics is then contained in the so-called low-energy constants of the effective Lagrangian, and only a finite number of these contribute at a given order in the expansion. These constants should, in principle, be calculable from QCD. However, at the present stage they are considered to be free parameters, to be determined from fits to the experimental data . The approach can be generalized to the sector with baryon number equal to $`1`$ or $`2`$ . Moreover, the approach allows for a systematic inclusion of electromagnetic interactions, albeit at the cost of an increased number of low-energy constants in the effective Lagrangian . These new "electromagnetic" low-energy constants describe the high-energy processes corresponding to the direct interaction of quarks with photons, and thus supply the piece that is missing from the potential-type models: in the latter, generally, only the long-range part of the electromagnetic interactions, corresponding to photon exchange between different hadrons, is taken into account.
From the size of the effect coming from the "electromagnetic" low-energy constants, along with the other missing sources of electromagnetic corrections (see below), one can therefore judge the systematic error incurred by the choice of a potential-type model. From the above, it appears that the most nontrivial task one encounters in the systematic treatment of electromagnetic interactions in low-energy hadronic processes is the determination of the precise values of the "electromagnetic" low-energy constants in the effective low-energy Lagrangians, which at the present time are rather poorly known. Different methods have been used to this end so far. The resonance saturation method was used to evaluate these constants in the $`O(e^2p^2)`$ Lagrangian with Goldstone bosons. Further, in Ref. it was demonstrated that these constants can be expressed as a convolution of a QCD correlation function with the photon propagator, plus a contribution from the QED counterterms. Sum rules were then used to evaluate these constants . A somewhat different approach to the calculation of these constants was used in . The results of these approaches do not all agree. One important lesson, however, can be drawn immediately: since these low-energy constants turn out to depend on the QCD scale $`\mu _0`$ that must be introduced in the QCD Lagrangian once the electromagnetic corrections are taken into account , the naive separation of the isospin-breaking effects into "electromagnetic" and "strong" parts, the latter corresponding to the difference of quark masses, does not hold in general and can be carried out only for certain observables at a certain chiral order. Generally, both parts depend on the QCD scale $`\mu _0`$, which then cancels in the sum. For this reason, hereafter we prefer to speak of the isospin-breaking corrections to physical observables, rather than of the individual contributions to them. The session of the working group on electromagnetic corrections at the MENU99 symposium was designed to cover all the important steps in the treatment of isospin-breaking corrections on the basis of modern effective theories: starting from the construction of effective Lagrangians containing non-QCD degrees of freedom (photons and leptons) and from examples of the determination of some of the non-QCD low-energy constants, using sum rules and large-$`N_c`$ arguments, to the actual application of the framework to the calculation of physical quantities: $`P_{l2}`$ and $`P\to \ell ^+\ell ^{-}`$ decay rates, $`\pi \pi `$ scattering amplitudes, as well as the observables of $`\pi ^+\pi ^{}`$ and $`\pi ^{}p`$ hadronic atoms, namely energy levels and decay characteristics. The construction of a low-energy effective field theory which allows a full treatment of isospin-breaking effects in semileptonic weak interactions was discussed in the talk by H. Neufeld (see also for more details). In addition to the pseudoscalars and the photon, the light leptons were also included as dynamical degrees of freedom in an appropriate chiral Lagrangian: only within such a framework will one have full control over all possible isospin-breaking effects in the analysis of the new high-statistics $`K_{\ell 4}`$ experiments by the E865 and KLOE collaborations. The same methods are also necessary for the interpretation of forthcoming high-precision experiments on other semileptonic decays such as $`K_{\ell 3}`$, etc.
The one-loop functional in the presence of leptonic sources was evaluated by using the super-heat-kernel technique . At next-to-leading order, the list of low-energy constants has to be enlarged as compared to the case of QCD+photons. If only terms at most quadratic in the lepton fields, and at most linear in the Fermi coupling $`G_F`$, are considered, the number of additional low-energy constants $`X_i,i=1\mathrm{}7`$ is seven. Further, regarding "pure" lepton or photon bilinears as "trivial", only three of the remaining five low-energy constants contribute to realistic physical processes. One may therefore conclude that the inclusion of virtual leptons in chiral perturbation theory comes at a rather moderate cost. As an immediate application of the formalism, the full set of electromagnetic corrections to $`P_{l2}`$ decays was calculated. It was demonstrated that in certain combinations of the widths of different decay processes the "new" low-energy constants cancel, leading to one-loop predictions that do not depend on the parameters $`X_i`$. A full calculation of the electromagnetic corrections to the $`K_{l3},K_{l4}`$ processes is in progress. Evidently, the actual evaluation of a large number of low-energy constants lies at the heart of any successful application of the effective Lagrangian approach. In the talk by M. Knecht, the evaluation of one such low-energy constant, which contributes to the decays of pseudoscalars into lepton pairs, was presented on the basis of sum rules and large-$`N_c`$ arguments (for more details, see ). In the large-$`N_c`$ limit, the QCD spectrum consists of a tower of infinitely narrow resonances in each channel. Employing further the so-called Lowest Meson Dominance (LMD) approximation, which truncates the infinite sum over resonances in the sum rules, one can relate the low-energy constants to the known parameters of the low-lying resonances. It was demonstrated that using the value of this particular low-energy constant determined in the LMD approximation leads to values of the ratio of branching ratios $`Br(P\to \ell ^+\ell ^{-})/Br(P\to \gamma \gamma )`$ for the processes $`\pi ^0\to e^+e^{}`$ and $`\eta \to \mu ^+\mu ^{}`$ that are consistent with the present experimental data. A prediction for the same ratio in the process $`\eta \to e^+e^{}`$ was also given. As another application of the effective theories, in the talk by M. Knecht (see also ) a complete calculation of the electromagnetic corrections to the $`\pi ^0\pi ^0\to \pi ^0\pi ^0`$ and $`\pi ^+\pi ^{}\to \pi ^0\pi ^0`$ scattering amplitudes at $`O(e^2p^2)`$ was presented. The latter case is particularly interesting since the results can be directly translated into corrections to the decay width of the $`\pi ^+\pi ^{}`$ atom. It turns out that the isospin-breaking corrections to the $`\pi \pi `$ scattering amplitudes are of the same order of magnitude as the two-loop strong corrections and therefore cannot be neglected. Bound states of hadrons in chiral effective theories - hadronic atoms - were considered in the talks by J. Soto, V. Lyubovitskij and A. Rusetsky . It was demonstrated that a systematic evaluation of isospin-breaking corrections to the observable characteristics of this sort of bound system is possible order by order in ChPT.
To this end, a nonrelativistic effective Lagrangian approach was used, which provides the necessary bridge between the bound-state characteristics and the scattering $`S`$-matrix elements in a most elegant and economical manner: in the end, according to the matching condition between the relativistic and nonrelativistic theories, these characteristics are expressed in terms of scattering matrix elements calculated in the relativistic theory, to which one can apply the conventional machinery of ChPT. The talk by J. Soto (for details, see Ref. ) focused on the foundations of the nonrelativistic effective Lagrangian approach as applied to the $`\pi ^+\pi ^{}`$ atom decay problem. The different scales relevant for the problem were thoroughly discussed and disentangled. It was demonstrated that, by matching the parameters of the nonrelativistic Lagrangian to the relativistic theory, it is possible to evaluate the decay width of the $`\pi ^+\pi ^{}`$ atom order by order in ChPT. The nonrelativistic effective Lagrangian approach was also applied to the $`\pi ^+\pi ^{}`$ atom decay problem in the talk by V. Lyubovitskij (see for more details). It was demonstrated that, employing a different technique for the matching of the relativistic and nonrelativistic theories, it is possible to obtain the general expression for the decay width of the $`\pi ^+\pi ^{}`$ atom at first nonleading order in isospin breaking without explicit use of the chiral expansion; that is, the result is valid to all orders in ChPT. At order $`O(e^2p^2)`$ in ChPT, one may use the results of the calculations by Knecht and Urech ; in this manner, it was demonstrated that the one-loop corrections to the decay width are rather small, including the uncertainty coming from the "strong" and "electromagnetic" low-energy constants. The bulk of the total correction is already given by the tree-level diagram, which is free from this uncertainty. The results given in the talk by V. Lyubovitskij finalize the treatment of the $`\pi ^+\pi ^{}`$ atom decay problem: the width is now known to an accuracy sufficient to fully exploit the future precision data from the DIRAC experiment at CERN. In the talk by A. Rusetsky, the nonrelativistic effective Lagrangian approach was applied to the calculation of the ground-state energy of the $`\pi ^{}p`$ atom. This example fully displays the power and flexibility of the nonrelativistic approach: the spin-dependent part of the problem trivializes, and the treatment proceeds very similarly to the case of the $`\pi ^+\pi ^{}`$ atom. However, the isospin-breaking piece of the relativistic scattering amplitude in the $`\pi ^{}p`$ case contains both "strong" and "electromagnetic" low-energy constants already at tree level (order $`p^2`$). This part of the isospin-symmetry-breaking effect is missing in the potential model . From the explicit expression for the $`\pi ^{}p`$ scattering amplitude at $`O(p^2)`$, one can immediately identify two distinct sources of isospin-breaking corrections that are not present in the potential model. The terms that depend on the quark masses in the strong part of the $`\pi ^{}p`$ amplitude contribute to the isospin-breaking piece; this contribution is proportional to the charged and neutral pion mass difference. In addition, the direct quark-photon interaction encoded in the "electromagnetic" low-energy constants also contributes to the isospin breaking.
The uncertainty introduced by the poor knowledge of these low-energy constants is, unlike in the $`\pi ^+\pi ^{}`$ case, much larger than the estimate based on the potential model ; thus, the latter considerably underestimates the systematic error in the analysis of the pionic hydrogen data provided by the experiment at PSI. To summarize, I shall briefly dwell on the developments foreseen in the near future.

- A complete treatment of the isospin-breaking corrections to $`K_{l3}`$ and $`K_{l4}`$ decays on the basis of modern effective field theories, now in progress, is of great importance for the analysis of the precision data samples from the E865 (BNL) and KLOE (LNF-INFN) experiments.

- For the systematic treatment of isospin-breaking corrections, precise values of the low-energy constants that enter the effective Lagrangian are needed. In particular, this concerns the values of the "electromagnetic" low-energy constants, which are poorly known. Activities based on sum rules, resonance saturation models, etc. may provide extremely useful information in this respect.

- At present, the problem of $`\pi ^+\pi ^{}`$ atom decay is completely understood, both conceptually and numerically. The expected high-precision data from the DIRAC experiment can therefore be used to determine the difference of the $`\pi \pi `$ $`S`$-wave scattering lengths $`a_0-a_2`$ quite accurately. By contrast, the issue of the isospin-breaking corrections to the observables of pionic hydrogen (and pionic deuterium) needs further investigation. The uncertainty due to the poor knowledge of the low-energy constants in the isospin-breaking part of the $`\pi N`$ scattering amplitude is large already at tree level, and the evaluation of the one-loop contributions is in progress. These studies will be even more important for the future improved experiment at PSI , which intends to measure the $`S`$-wave $`\pi N`$ scattering lengths using the data from pionic hydrogen alone. A detailed investigation of the properties of the kaonic atoms to be studied by the DEAR collaboration at DA$`\mathrm{\Phi }`$NE is also planned.

- The success of the nonrelativistic Lagrangian approach to the hadronic atom problem clearly demonstrates that the formalism of the potential model, based on Schrödinger-type equations, can be applied to the evaluation of isospin-breaking corrections, provided the potential contains the full content of the isospin-symmetry-breaking effects of the Standard Model. It will therefore be extremely important to set up a systematic framework for deriving such potentials from effective field-theoretical Lagrangians; the already existing machinery of the potential model could then be exploited directly.

Acknowledgments. I would like to thank the organizers of the MENU99 symposium for their great effort, which created an atmosphere of intense discussion and learning, and J. Gasser for reading the manuscript. This work was supported in part by the Swiss National Science Foundation, and by TMR, BBW-Contract No. 97.0131 and EC-Contract No. ERBFMRX-CT980169 (EURODA$`\mathrm{\Phi }`$NE).
# Universal Fluctuations in Correlated Systems

## Abstract

The probability density function (PDF) of a global measure in a large class of highly correlated systems has been suggested to be of the same functional form. Here, we identify the analytical form of the PDF of one such measure, the order parameter in the low temperature phase of the 2D-XY model. We demonstrate that this function describes the fluctuations of global quantities in other correlated, equilibrium and non-equilibrium systems. These include a coupled rotor model, Ising and percolation models, models of forest fires, sand-piles, avalanches and granular media in a self organized critical state. We discuss the relationship with both Gaussian and extremal statistics.

PACS numbers: 05.40, 05.65, 47.27, 68.35.Rh

Self similarity is an important feature of the natural world. It arises in strongly correlated many body systems when fluctuations over all scales from a microscopic length $`a`$ to a diverging correlation length $`\xi `$ lead to the appearance of "anomalous dimension" and fractal properties. However, even in an ideal world the divergence of $`\xi `$ must ultimately be cut off by a macroscopic length $`L`$, allowing the definition of a range of scales between $`a`$ and $`L`$ over which the anomalous behaviour can occur. Such systems are found, for example, in critical phenomena, in Self-Organized Criticality or in turbulent flow problems. By analogy with fluid mechanics we shall call these finite size critical systems "inertial systems" and the range of scales between $`a`$ and $`L`$ the "inertial range". One of the anomalous statistical properties of inertial systems is that, whatever their size, they can never be divided into mesoscopic regions that are statistically independent. As a result they do not satisfy the basic criterion of the central limit theorem, and one should not necessarily expect global, or spatially averaged, quantities to have Gaussian fluctuations about the mean value. In Ref. (BHP) it was demonstrated that two of these systems, a model of finite size critical behaviour and a steady state in a closed turbulent flow experiment, share the same non-Gaussian probability distribution function (PDF) for fluctuations of global quantities. Consequently it was proposed that these two systems - so utterly dissimilar in regards to their microscopic details - share the same statistics simply because they are critical. If this is the case, one should then be able to describe turbulence as a finite-size critical phenomenon, with an effective "universality class". As, however, turbulence and the magnetic model are very unlikely to share the same universality class, it was implied that the differences that separate critical phenomena into universality classes represent at most a minor perturbation on the functional form of the PDF. In this paper, to test this proposition, we determine the functional form of the BHP fluctuation spectrum and show that it indeed applies to a large class of inertial systems. The magnetic model studied by BHP, the spin wave limit of the two dimensional XY (2D-XY) model, is defined by the harmonic Hamiltonian

$$H=-J\sum _{\langle i,j\rangle }\left[1-\frac{1}{2}\left(\theta _i-\theta _j\right)^2\right]$$ (1)

where $`J`$ is the near neighbour exchange constant for angular variables $`\theta _i`$ that occupy a square lattice with periodic boundary conditions.
The magnetization is defined as $`m=(1/N)\sum _i\mathrm{cos}(\theta _i-\overline{\theta })`$, where $`\overline{\theta }`$ is the instantaneous mean orientation. This model is critical at all temperatures and for an infinite system has algebraic correlations on all length scales. In the finite system the lattice constant $`a`$ and the system size $`L=a\sqrt{N}`$ define a natural inertial range. The model can be diagonalized in Fourier space, which makes it very convenient for analytical work. The PDF of the magnetization $`P(m)`$ can be expressed as the Fourier transform of a sum over its moments. In Ref. it was shown that the moments are given by $`\mu _n=g_n(g_2/2)^{n/2}\sigma ^n`$, where $`\sigma ^2`$ is the variance and the $`g_k`$ $`(k=2,3,4,\mathrm{})`$ are sums related to the lattice Green function in Fourier space $`G(\mathbf{q})`$: $`g_k=\sum _{\mathbf{q}}G(\mathbf{q})^k/N^k`$. The fact that $`\mu _n\propto \mu _1^n`$ means that a change of $`N`$ or $`T`$ is equivalent to a linear transformation of the variate $`m`$; hence, the PDF can be expressed in a universal form. As shown in , the moment series can be resummed to give the following expression, exact to leading order in $`N`$:

$$P(y)=\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\frac{dx}{2\pi \sigma }\mathrm{exp}\left[iyx+\sum _{k=2}^{\mathrm{\infty }}\frac{g_k}{2k}\left(ix\sqrt{\frac{2}{g_2}}\right)^k\right].$$ (2)

Here $`y=(m-\langle m\rangle )/\sigma `$ and $`\langle m\rangle `$ is the mean of the distribution. Including only $`g_2`$ in (2) would give a Gaussian PDF with variance $`\sigma ^2`$. However the terms for $`k>2`$ cannot be neglected and $`\mathrm{\Pi }(y)=\sigma P(y)`$ is a non-Gaussian, universal function, independent of both the size of the system and the temperature. Without loss of generality one can make the quadratic approximation $`m=1-\sum _i(\theta _i-\overline{\theta })^2/2N`$, which allows us to transform Eqn. (2) to a form suitable for numerical integration:

$$\mathrm{\Pi }(y)=\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\sqrt{\frac{g_2}{2}}\frac{dx}{2\pi }\mathrm{exp}\left[i\mathrm{\Phi }(x)\right]$$ (3)

$$i\mathrm{\Phi }(x)=ixy\sqrt{\frac{g_2}{2}}-i\frac{x}{2}\mathrm{Tr}\,G/N-\frac{1}{2}\mathrm{Tr}\,\mathrm{log}\left(\mathbf{1}-ixG/N\right)$$ (4)

(Here the trace Tr of any function of $`G`$ is defined as the sum over $`\mathbf{q}\ne \mathbf{0}`$ of the same function of $`G(\mathbf{q})`$, which can be simply proved by making a Taylor expansion of the function near $`G=0`$.) In order to make an accurate test of this expression we have performed a high resolution molecular dynamics simulation of $`P(m)`$. Fig. 1 compares the integrated Eqn. (2) with data for a system of $`1024`$ classical rotors integrated over $`10^8`$ molecular dynamics time steps in the low temperature phase. The agreement is globally excellent, particularly in the wings of the distribution and along the exponential tail for fluctuations below the mean. The asymptotic values of $`\mathrm{\Pi }(y)`$ are related to the saddle points of the integrand in (3). We find

$$\mathrm{\Pi }(y)\propto |y|\,\mathrm{exp}\left(\frac{\pi }{2}by\right)\quad \mathrm{for}\quad y\ll -1$$ (5)

$$\mathrm{\Pi }(y)\propto \mathrm{exp}\left(-\frac{\pi }{2}e^{b(y-s)}\right)\quad \mathrm{for}\quad y\gg 1$$ (6)

where $`b=\frac{1}{8\pi }\sqrt{g_2/2}\approx 1.105`$ and $`s=0.745`$. These forms give the correct asymptotic gradients of the molecular dynamics data on logarithmic and double logarithmic scales. The asymptotic forms are an accurate approximation to Eqn.
(2) for large $`|y|`$; however deviations from the asymptotes are important over most of the physical range of $`y`$, which is typically limited to $`\mathrm{log}|y|\sim O(1)`$. Equations (5) and (6) serve as a guide to finding a good approximation to the functional form of $`\mathrm{\Pi }(y)`$ in this range. To do this, we observe that the factor of $`y`$ in (5) can be regarded as constant in this regime, which along with (5) and (6) immediately suggests the form

$$\mathrm{\Pi }(y)=K\left(e^{x-e^x}\right)^a;\quad x=b(y-s),\quad a=\pi /2.$$ (7)

This function must obey the three conditions of unit area, zero mean and unit variance, which fixes $`b`$, $`s`$ and $`K`$ to values slightly different from those found analytically: $`b=0.938`$, $`s=0.374`$, $`K=2.14`$. An alternative approach is to choose the parameters in the generalized function $`Ke^{a\left(b(y-s)-e^{b(y-s)}\right)}`$ such that the first four Fourier coefficients match Eqn. (2). In this case we find $`a=1.58`$, $`K=2.16`$, $`b=0.934`$, $`s=0.373`$, in satisfying agreement with the previous estimates. The ratios of the higher order Fourier coefficients differ from unity only very slowly, showing that Eqn. (7) is an accurate approximation to $`\mathrm{\Pi }(y)`$. This is directly confirmed by plotting Eqn. (7) against the molecular dynamics and exact results in Fig. 1, where the fit is seen to be of extremely high precision. We now test the idea that the BHP fluctuation spectrum of the form of Eqn. (7) is exhibited by many types of inertial system. Fig. 2 shows the numerically simulated PDF of global quantities in several equilibrium and non-equilibrium models. The equilibrium models include the 2D Ising model at a temperature $`T^{*}(N)`$ just below the critical temperature, and a 2D site percolation model on a square lattice at a site occupation probability $`P^{*}(N)`$ just above the percolation transition. The numerical results refer to the fluctuations of the absolute value of the magnetization and to the fluctuations in the size of the spanning cluster, respectively. The non-equilibrium models are of the type that, when driven slowly, enter a scale free or critical steady state defined as Self-Organized Criticality (SOC). Here the global quantity is essentially a dissipation rate that fluctuates about a well defined mean value in the steady state. Details of the individual SOC models are as follows. (1) The auto-igniting forest fire model consists of "trees" planted at random on the vacant sites of a square lattice with probability $`p`$. In each time step the age $`T_i`$ of a tree on site $`i`$ is incremented by one unit. When $`T_i=T_{max}`$ the tree ignites and $`T_i`$ is reset to zero. Trees can also catch fire by being nearest neighbour to a site on fire. The energy, or wood, stored in a tree is proportional to $`T`$, and the figure shows the PDF of the total energy dissipated in fires at each time step. (2) In the Bak-Tang-Wiesenfeld (BTW) sandpile model a dynamical variable $`E_i`$ is defined on lattice site $`i`$. The model is driven by adding units of the $`E`$-field to randomly selected sites. When $`E_i>E_{max}`$ the site variable is decreased by $`E_{max}`$ and the $`E`$-variable of each of the $`z`$ neighbour sites is increased by $`E_{max}/z`$. One or more of the neighbour sites may then acquire an $`E`$-value larger than $`E_{max}`$ and an avalanche is induced. The PDF shown refers to the fluctuations in the instantaneous number of relaxing sites. (3) In the Sneppen depinning model an interface moves through a static random field of pinning forces.
The site along the interface that experiences the smallest pinning force is moved one unit ahead. If the local slope $`s_i`$ exceeds $`1`$, then the neighbouring sites are moved one unit ahead until all $`s_i\le 1`$. We call such a sequence of updates a micro-avalanche and calculate the PDF of the sum of areas covered by the progressing interface during an integral time scale $`T`$ dependent on system size. (4) The model for granular media is a 'Tetris-like' 2D lattice gas ensemble of anisotropic particles settling under gravity in a finite box . Due to the geometrical frustration, the total mass varies from one realization of the filling process to another. The PDF for fluctuations in the bulk density of the particles is shown. Referring to Fig. 2, the data sets for all models fall close to the BHP form, Eqn. (7). In the equilibrium models (lower curves) the self similarity is expected only at the system-size dependent critical temperature $`T^{*}(N)`$ or percolation probability $`P^{*}(N)`$. The PDF for the 2D Ising model, for example, is temperature dependent, but makes a close approach to the BHP form around $`T^{*}(N)`$. We believe that the remaining deviations for fluctuations above the mean are due to the limited inertial range for the system sizes studied. For the non-equilibrium systems (upper curves) some of the data sets also show some deviation. This may simply be due to poor statistics, as the deviations on the two sides of the mean are related by the constraints of normalisation. In this respect, we note that extremely good statistics were required to get a really satisfactory fit to Eqn. (7) for the 2D-XY model, while for more limited data sets systematic deviations occurred. Nevertheless it is clear that, to leading order, Eqn. (7) correctly gives the behaviour of the global fluctuation spectrum in all these systems, independently of the details of each example. We propose that this is a consequence of the systems sharing the properties of finite size, strong correlations and self similarity. To clarify this proposition, we return to our calculation on the 2D-XY model. One can see explicitly that the BHP spectrum arises through the appearance of anomalous dimension and contributions on all length scales of the inertial range. The magnetization can be written, within the quadratic approximation, $`m=1-\mathrm{\Sigma }_{\mathbf{q}}m_{\mathbf{q}}`$, where the $`m_{\mathbf{q}}`$s are the amplitudes from the individual spin wave modes. These are statistically independent positive variates with PDF

$$P(m_q)=\sqrt{\frac{\beta q^2N}{4\pi }}m_q^{-1/2}\mathrm{exp}(-\beta Nq^2m_q),$$ (8)

whose mean and standard deviation therefore scale with $`q^{-2}`$. The "softest" modes have wave vector $`q=2\pi /L`$ and hence, by themselves, make contributions of $`O(1)`$ to $`m`$, while the modes on the zone boundary with $`q=\pi /a`$ have only microscopic amplitude. The moments of $`P(m)`$ are determined by the mean magnetization. This is proportional to the integral over all contributions, $`\int _{2\pi /L}^{\pi /a}q^{-2}n(q)\,dq`$, where $`n(q)\propto q^{d-1}`$ is the density of states. In one dimension the integral depends only on the lower limit $`2\pi /L`$ and only the soft modes count, while in three dimensions only the upper limit $`\pi /a`$ is important and the multitude of modes near the zone boundary dominate the sum. In two dimensions, however, both limits of the integral are required and a detailed calculation gives $`\langle m\rangle =1-(\eta /2)\mathrm{log}(CL/a)`$, with $`C=1.87`$ and critical exponent $`\eta =T/2\pi J`$.
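Within the quadratic approximation, this structure can be checked numerically without simulating the rotor dynamics: each mode amplitude $`m_q`$ can be drawn directly from Eqn. (8), which is a Gamma distribution of shape $`1/2`$ and scale $`1/(\beta Nq^2)`$, the sum $`m=1-\sum _qm_q`$ accumulated, and the histogram of $`y=(m-\langle m\rangle )/\sigma `$ compared with Eqn. (7). A minimal sketch in Python follows; the lattice size, the temperature, and the use of the lattice Laplacian eigenvalues $`\widehat{q}^2=4-2\mathrm{cos}q_x-2\mathrm{cos}q_y`$ in place of $`q^2`$ are our own illustrative assumptions, not taken from the original study.

```python
import numpy as np

# Lattice momenta for an L x L periodic square lattice, zero mode excluded.
# q_hat^2 = 4 - 2 cos(qx) - 2 cos(qy) are the lattice Laplacian eigenvalues,
# used here as a stand-in for q^2 in Eqn. (8) -- an assumption of this sketch.
L = 16
N = L * L
k = 2.0 * np.pi * np.arange(L) / L
qx, qy = np.meshgrid(k, k)
q2 = (4.0 - 2.0 * np.cos(qx) - 2.0 * np.cos(qy)).ravel()[1:]  # drop q = 0

beta = 10.0   # beta = J/T with J = 1, deep in the low-temperature phase
rng = np.random.default_rng(1)

# Eqn. (8): P(m_q) ~ m_q^{-1/2} exp(-beta N q^2 m_q), i.e. a Gamma(1/2)
# variate with scale 1/(beta N q^2). Sample in chunks to limit memory.
samples = []
for _ in range(40):
    m_q = rng.gamma(0.5, 1.0 / (beta * N * q2), size=(5000, q2.size))
    samples.append(1.0 - m_q.sum(axis=1))
m = np.concatenate(samples)

y = (m - m.mean()) / m.std()

# Universal form, Eqn. (7), with the constants quoted in the text.
K, b, s, a = 2.14, 0.938, 0.374, np.pi / 2
hist, edges = np.histogram(y, bins=60, range=(-6, 3), density=True)
c = 0.5 * (edges[1:] + edges[:-1])
x = b * (c - s)
bhp = K * np.exp(a * (x - np.exp(x)))
for yc, h, p in zip(c[::10], hist[::10], bhp[::10]):
    print(f"y = {yc:+.2f}   sampled = {h:.4f}   BHP = {p:.4f}")
```

Finite-size corrections aside (Eqn. (7) is the large-$`N`$ limit), the sampled histogram should track the BHP curve, including the exponential tail below the mean.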
The relevance of fluctuations over all length scales of the zone therefore leads to the "anomalous" term $`\mathrm{log}(L/a)`$, and it ensures that the system cannot be cut into statistically independent parts. The spinwave approximation to the XY-model is exactly equivalent to the Edwards-Wilkinson (EW) model of a growing interface in steady state , with the square of the interface width $`w`$ equal to the sum over the amplitudes $`m_q`$: $`w^2=\sum _qm_q`$. The fluctuations in the width of the interface have been studied by Foltin et al. for the 1D case and by Rácz and Plischke for the 2D interface. The BHP spectrum is found for the critical two dimensional case only, and our calculation can be considered the completion of the study in Ref. . The functional form of Eqn. (7) suggests a relationship to Gumbel's first asymptote for extreme value statistics, which has recently been discussed in relation to turbulence in one dimension. The form (7), but with $`a`$ taking integer values $`a=1,2,3,\mathrm{}`$, would correspond to the PDF for the first, second, third, … largest of $`N`$ random numbers. However, the exponent $`a=\pi /2`$ suggests, as we have argued, that the fluctuations in $`m`$ are not dominated by single independent variables. Rather, the analytic derivation of Eqn. (7) shows that if extreme value statistics are involved they must be related to the statistics of some emergent coherent collective excitation of the system. This is borne out in the simulations of the Ising model and of all the SOC models studied. In the Ising model, it is found that both the full magnetization and the contribution to the magnetization from the largest connected cluster of parallel spins give the same PDF, within numerical error. For the Sneppen model the PDF of the sum over avalanches and that of the largest avalanche during the integral time $`T`$ are both found to be of the BHP form, even though these quantities are not related by a simple scale. If the avalanches appearing during time $`T`$ were uncorrelated, one would expect the PDF for the largest avalanche to be Gumbel's asymptote with $`a=1`$. The modification towards our form therefore indicates that there are correlations between events during the period $`T`$. To test this idea, we have studied the PDF of the extreme values taken from sets of linearly correlated variables. The process consists of generating a vector $`\stackrel{}{\chi }=(\chi _1,\mathrm{},\chi _N)`$, where the $`\chi _i`$ are all independent and exponentially distributed. The maximum signal is obtained as $`\xi _{max}=\mathrm{max}\{\xi _1,\mathrm{},\xi _N\}`$, where the vector $`\stackrel{}{\xi }=𝐌\stackrel{}{\chi }`$ and $`𝐌`$ is an $`N\times N`$ matrix with random but fixed elements. The resulting spectrum, the central data set in Fig. 2, is found to be very close to the BHP spectrum (see the sketch below).
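This construction is straightforward to reproduce numerically. The text does not specify the distribution of the matrix elements, the value of $`N`$, or the plotting orientation, so the uniform $`[0,1)`$ elements, $`N=256`$, and the sign convention below (Eqn. (7) as written has its exponential tail at $`y<0`$, so the standardized maximum is negated) are illustrative assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
n_real = 20_000

# Fixed N x N mixing matrix, "random but fixed"; uniform elements assumed.
M = rng.random((N, N))

# chi_i independent and exponentially distributed; xi = M chi; keep max(xi).
chi = rng.exponential(1.0, size=(n_real, N))
xi_max = (chi @ M.T).max(axis=1)

# Standardize; negate so the slow (exponential) tail sits at y < 0, matching
# the orientation of Eqn. (7). This sign convention is our assumption.
y = -(xi_max - xi_max.mean()) / xi_max.std()

K, b, s, a = 2.14, 0.938, 0.374, np.pi / 2   # BHP constants from the text
hist, edges = np.histogram(y, bins=50, range=(-5, 3), density=True)
c = 0.5 * (edges[1:] + edges[:-1])
x = b * (c - s)
bhp = K * np.exp(a * (x - np.exp(x)))
for yc, h, p in zip(c[::8], hist[::8], bhp[::8]):
    print(f"y = {yc:+.2f}   sampled = {h:.4f}   BHP = {p:.4f}")
```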
In conclusion, our results imply that the non-Gaussian PDF of a global quantity in a critical system is a consequence of finite size, strong correlations and self similarity, and is independent of universality class to leading order. Clearly many more studies of this point are required. Non-linearity does not appear to be an essential feature, over and above the necessity, in a closed system, of coupling the elementary degrees of freedom. Indeed, if non-linearity were essential, it would seem impossible that the linear spin-wave theory could capture the fluctuations in the turbulence experiment . Rácz and Plischke have studied a series of linear and non-linear models for growing interfaces. All show asymmetric PDFs for the interface width, with long tails in qualitative agreement with our data. It would be very interesting to examine these models in detail to see how, in a controlled environment, the strength of the non-linearity affects the form of the fluctuations. Finally, it seems that a relationship exists between the BHP curve and extremal statistics. Although we have shown that the BHP behaviour is not simply due to extreme values of the statistically independent degrees of freedom of the 2D-XY model, extreme values do appear to dominate the real space coherent structures that are excited in the critical (Ising model) or self-organized critical (Sneppen model) state. Our findings thus establish a completely new and general consequence of self similarity, and they open the door to numerous studies that could lead to a unified global description of aspects of equilibrium and non-equilibrium behaviour. It is a pleasure to thank P. Sinha-Ray for supplying the auto-igniting forest fire model data. This work was supported by the Ministère de l'Education et de la Recherche No. ACI-2226(2); HJJ and SL are supported by the British EPSRC; JL and MS (ERBFMBICT983561) are supported by the European Commission.
# The Supernova Relic Neutrino Background

## I Introduction

A Type II supernova (SN $`\mathrm{II}`$) - the explosion triggered by the gravitational collapse of a single massive star - emits 99% of its energy in neutrinos. The relic $`\overline{\nu }_\mathrm{e}`$ background created by all past SN $`\mathrm{II}`$ is potentially detectable in present and/or future large underground neutrino detectors. One of the goals of the current SuperK and SNO detectors is to detect this SRN background . The predicted SN $`\mathrm{II}`$ relic $`\overline{\nu }_\mathrm{e}`$ (SRN) flux depends crucially on the SN $`\mathrm{II}`$ rate as a function of redshift, and the epoch of the maximum SN rate (throughout, the SN rate refers to the comoving density of the SN $`\mathrm{II}`$ rate) is important in determining the detectability of the SRN background. For example, if the SN rate should peak at redshifts of order 2 to 3, the majority of the SRNs would be redshifted to energies below typical detector threshold energies ($`\sim `$ 5 MeV). Since the same objects which are responsible for creating the SRN background are also responsible for the bulk of the heavy element production, knowledge of the SN $`\mathrm{II}`$ metal and neutrino production, in concert with the observationally inferred metal enrichment history of the Universe, is the straightest (i.e., least model dependent) path to predicting the flux and spectrum of relic $`\overline{\nu }_\mathrm{e}`$ from all past SN $`\mathrm{II}`$ (Type I supernovae are not expected to contribute appreciably to the relic neutrino background). It is our goal here to follow this path in providing a generous upper bound to the expected SRN background. Furthermore, we account for the characteristics of the SuperK and SNO detectors, and use our calculated (upper bound to the) SRN background to predict (upper bounds to) the event rates at these detectors. These event rates are compared to expected backgrounds and to current limits. The current upper limit (from the Kamiokande II detector) on the flux of supernova relic $`\overline{\nu }_\mathrm{e}`$ in the energy interval from 19 to 35 MeV is 226 $`\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$ . SNO is just beginning operation and has not decided upon a neutron detection strategy, which is vital to the detection of $`\overline{\nu }_\mathrm{e}`$. In the last few years progress has been made in constraining the recent star formation history of the Universe. In particular, a variety of observational evidence seems consistent with a comoving star formation rate (SFR) density which was much higher at redshifts $`z\sim 1`$ than at the present epoch. The history of star formation beyond $`z\sim 1`$ is less certain and it is not yet clear whether the Universal SFR declined rapidly or evolved only mildly at higher redshifts. Support for this scenario comes from Pei & Fall, who have used chemical evolution models to explore the SFR and metal enrichment history inferred from observations of damped Ly$`\alpha `$ systems. They find that the observed H $`\mathrm{I}`$ column densities may not represent the true column densities because of significant corrections due to dust. Since the Ly$`\alpha `$ systems are identified from the spectra of quasars and since the Ly$`\alpha `$ systems may contain dust, the implication is that some of the quasars may be invisible. This, in turn, suggests that some Ly$`\alpha `$ systems may go undetected.
When Pei & Fall correct for the effects of dust obscuration, they find evidence for rapid star formation at low redshifts ($`z\lesssim 1`$). This is to be compared to the predicted peak star formation epoch at redshifts of order 3 - 4 when obscuration is not taken into account. In particular, for their model with infall, Pei & Fall find that the observational data are consistent with a SFR which increases until $`z\sim 1`$ and then decreases with further increases in redshift. Independent observations by Madau et al. of the metallicity enrichment rate (MER) are in excellent agreement with the Pei & Fall results. Direct quantitative support for Pei & Fall's model comes from the Canada-France redshift survey of faint galaxies, which found that the comoving UV luminosity density of the Universe shows a sharp decline from $`z\sim 1`$ to the present. Here, using the model of Pei and Fall, we parameterize the metal enrichment history observed by Madau et al. to predict the SN $`\mathrm{II}`$ relic $`\overline{\nu }_\mathrm{e}`$ flux at Earth. In the past, similar calculations of this relic $`\overline{\nu }_\mathrm{e}`$ flux have been done by Totani & Sato , Totani, Sato & Yoshii , Malaney and Hartmann & Woosley . In contrast to almost all of the above analyses, we strive to minimize any model dependences by directly relating the supernova rate and its evolution to observations of the metal enrichment history, and thereby obtain the SRN flux. In our calculations we always make "conservative" choices of any uncertain parameters so as to obtain a robust upper bound to the SRN flux. From this upper bound we will conclude that it is unlikely that SuperK will detect these relic neutrinos (because the signal will be buried under a large background event rate) and that the event rate in SNO should be vanishingly small. In §II, we outline the formalism for calculating the flux of SRN at Earth. In §III, we calculate conservative (i.e., generous) upper bounds to the relic $`\overline{\nu }_\mathrm{e}`$ event rates at SuperK and SNO. In §IV, we review previous estimates of the SRN flux in comparison with ours and we examine neutrino oscillations as a possible mechanism for increasing the SN relic $`\overline{\nu }_\mathrm{e}`$ flux.

## II The Supernova Relic Neutrino Spectrum

The spectrum of neutrinos at Earth due to all past supernovae depends on the differential (per unit energy interval) neutrino flux from each SN, on the redshift distribution of the SN rate, and on an assumed Friedmann-Robertson-Walker cosmology, which may be parameterized by the Hubble parameter $`H_0`$ and the matter density parameter $`\mathrm{\Omega }_0`$. For simplicity we ignore a possible cosmological constant at this point and discuss its effect later. If the supernova rate per unit comoving volume at redshift $`z`$ is $`\mathrm{N}_{\mathrm{SN}}(z)`$ and the neutrino energy distribution at the source (at energy $`ϵ`$) is $`\mathcal{F}_\nu ^\mathrm{S}(ϵ)`$, then the differential flux of relic neutrinos at Earth is given by

$$j_\nu (ϵ)=\frac{c}{H_0}\int _0^{\mathrm{\infty }}dz\,\frac{\mathrm{N}_{\mathrm{SN}}(z)\langle \mathcal{F}_\nu ^\mathrm{S}(ϵ^{\prime })\rangle }{(1+z)\sqrt{1+\mathrm{\Omega }_0z}},$$ (1)

where $`ϵ^{\prime }=(1+z)ϵ`$ and the neutrinos are assumed to be massless. The angled brackets indicate that the dependence of the neutrino flux on supernova progenitors with different masses should be averaged over the initial mass function (IMF). In practice we will choose values for these average quantities so as to maximize the SRN background.
The spectrum of the neutrinos from a supernova is parameterized as a Fermi-Dirac distribution with zero chemical potential, normalized to the total energy in a particular neutrino species ($`\mathrm{E}_\nu `$) emitted by the supernova, i.e., $`\int \mathcal{F}_\nu ^\mathrm{S}(ϵ)\,ϵ\,dϵ=\mathrm{E}_\nu `$. Then, for each neutrino species $`\nu `$,

$$\mathcal{F}_\nu ^\mathrm{S}(ϵ)=\mathrm{E}_\nu \times \frac{120}{7\pi ^4}\times \frac{ϵ^2}{\mathrm{T}_\nu ^4}\times \left[\mathrm{exp}\left(\frac{ϵ}{\mathrm{T}_\nu }\right)+1\right]^{-1}.$$ (2)

The neutrino luminosity is thus characterized by $`\mathrm{E}_\nu `$ and $`\mathrm{T}_\nu `$ which, in turn, depend on the SN progenitor mass. However, the problem of obtaining the IMF-averaged neutrino flux simplifies because $`\mathrm{T}_\nu `$ does not vary rapidly as the SN progenitor mass is changed . Adopting a flat $`\mathrm{\Omega }_0=1`$ cosmology, and setting $`x\equiv 1+z`$, we can then write Eq. (1) to a good approximation as

$$j_\nu (ϵ)=𝒜\frac{\mathrm{E}_\nu }{\mathrm{T}_\nu ^4}ϵ^2\int _1^{\mathrm{\infty }}dx\,\mathrm{N}_{\mathrm{SN}}(x)\frac{\sqrt{x}}{\mathrm{exp}(ϵx/\mathrm{T}_\nu )+1},$$ (3)

where $`𝒜=(120/7\pi ^4)\,cH_0^{-1}=1056\,h_{50}^{-1}`$ Mpc, with $`H_0=50h_{50}`$ km/s/Mpc. The results of Woosley et al. for a $`25\mathrm{M}_{\odot }`$ supernova progenitor are used to fix $`\mathrm{E}_\nu =11\times 10^{52}\mathrm{ergs}`$ and $`\mathrm{T}_\nu =5.3\mathrm{MeV}`$. The values of $`\mathrm{E}_\nu `$ and $`\mathrm{T}_\nu `$ characterize the detectable $`\overline{\nu }_\mathrm{e}`$ supernova neutrino spectrum . For comparison, recall that the data from SN 1987A gave $`\mathrm{E}_\nu =8\times 10^{52}\mathrm{ergs}`$ and $`\mathrm{T}_\nu =4.8\mathrm{MeV}`$ . Another issue concerns the role of the IMF and the selection of the $`\mathrm{E}_\nu `$ and $`\mathrm{T}_\nu `$ values as averages. To obtain an upper bound to the detection rate, we use values of $`\mathrm{E}_\nu `$ and $`\mathrm{T}_\nu `$ which provide upper bounds to any reasonable average. This is possible because the flux integrated over the observable energy window (and hence the event rate) is an increasing function of both $`\mathrm{E}_\nu `$ and $`\mathrm{T}_\nu `$. The value of $`\mathrm{T}_\nu `$ is particularly insensitive to the progenitor mass because $`\mathrm{T}_\nu `$ derives its value from the temperature of the neutrinosphere formed during the collapse, and the thermodynamic properties of the neutrinosphere (as long as it is well-defined) do not vary much with mass . In particular, $`\mathrm{T}_\nu \sim 4-5\mathrm{MeV}`$, with 5.3 MeV being at the upper end of the range. Thus, guided by models and SN 1987A, we parameterize the neutrino flux from supernovae so as to obtain a conservative upper bound to the SRN event rate. Under the assumption that the supernova rate tracks the metal enrichment rate, the supernova rate used to calculate the relic neutrino flux can be written as

$$\mathrm{N}_{\mathrm{SN}}(z)=\frac{\dot{\rho }_\mathrm{Z}(z)}{\mathrm{M}_\mathrm{Z}},$$ (4)

where $`\mathrm{M}_\mathrm{Z}`$ is the average yield of "metals" ($`\mathrm{Z}\ge 6`$) per supernova and $`\dot{\rho }_\mathrm{Z}`$ is the metal enrichment rate per unit comoving volume. We have implicitly assumed that the metals come from SN $`\mathrm{II}`$, consistent with nucleosynthesis arguments which show that the metal enrichment role of SN Ia is secondary to that of SN $`\mathrm{II}`$ . In any case, by neglecting SN Ia we overestimate the SRN flux (albeit by only a factor of 2 at most).
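As a quick consistency check on Eqn. (2), one can verify numerically that the prefactor $`120/7\pi ^4`$ indeed normalizes the energy integral to $`\mathrm{E}_\nu `$ (via the identity $`\int _0^{\mathrm{\infty }}ϵ^3[\mathrm{exp}(ϵ/\mathrm{T})+1]^{-1}dϵ=(7\pi ^4/120)\mathrm{T}^4`$), and compute the mean energy of the emitted $`\overline{\nu }_\mathrm{e}`$ for the adopted $`\mathrm{T}_\nu =5.3`$ MeV. A short sketch (the use of Python with scipy quadrature is our choice, not part of the original calculation):

```python
import numpy as np
from scipy.integrate import quad

T = 5.3  # MeV, adopted nu_bar_e "temperature"

fd = lambda e: 1.0 / (np.exp(e / T) + 1.0)  # zero-chemical-potential FD factor

# Energy integral: int e^3 / (exp(e/T)+1) de equals (7 pi^4 / 120) T^4,
# so the 120/(7 pi^4) prefactor in Eqn. (2) normalizes the spectrum to E_nu.
I3, _ = quad(lambda e: e**3 * fd(e), 0.0, 50 * T)
print(I3 / T**4, 7 * np.pi**4 / 120)   # both ~ 5.6822

# Mean neutrino energy <e> = (int e^3 fd) / (int e^2 fd) ~ 3.15 T ~ 16.7 MeV
I2, _ = quad(lambda e: e**2 * fd(e), 0.0, 50 * T)
print("mean energy [MeV]:", I3 / I2)
```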
The other point to be noted here concerns our use of the metal enrichment history instead of the possibly more direct SFR to compute the SN rate. Both the SFR and the metal enrichment rate can be inferred from observations of the UV luminosity of star forming galaxies. But unlike the SFR, which has a steep dependence on the adopted IMF, the metal enrichment rate is less sensitive to the IMF because the same (heavier) stars which are more UV luminous also eject more metals . Thus, the SN rate is more closely tied to the metal enrichment history than to the SFR. To parameterize the evolution of $`\dot{\rho }_\mathrm{Z}(z)`$ from the present back to $`z=1`$, we use the results of Pei & Fall. In particular, we use the comoving metal production rate for their case with infall (Fig. 1 of ), which is in good quantitative agreement with SFR observations at $`z<1`$ . Since the neutrino flux from individual supernovae falls rapidly with increasing energy, and the lower energy neutrinos from high redshift supernovae are redshifted below the threshold of detectability, our predictions are relatively insensitive to the high redshift ($`z>1`$) behavior (as also noted by Hartmann & Woosley ). This is fortunate since it is difficult to quantify precisely the $`z>1`$ evolution. For these reasons, we make the simplifying, conservative assumption that the supernova rate remains constant at higher redshifts: $`\mathrm{N}_{\mathrm{SN}}(z>1)=\mathrm{N}_{\mathrm{SN}}(z=1)`$. It should be noted that the $`z<1`$ evolution adopted here is likely quite robust, in that independent studies reveal the same pattern of evolution (including, for example, that of the QSO luminosity density ) and the different observational data are in good quantitative agreement. Nevertheless, some changes to our adopted chemical enrichment history could be envisaged, based on the arguments that the role of dust at high redshifts is still uncertain and that perhaps not all the star formation at higher redshifts has been observed . However, the relative insensitivity of our upper bound to the high redshift behavior insulates it against such uncertainty. To determine the average amount of metals ejected per supernova, the results of the calculations of supernova nucleosynthesis by Woosley and Weaver are employed. From their published tables of the elemental composition of the ejecta, it can be ascertained that the heavy element yield ranges from $`\mathrm{M}_\mathrm{Z}=1.1\mathrm{M}_{\odot }`$ for a $`15\mathrm{M}_{\odot }`$ SN progenitor to $`\mathrm{M}_\mathrm{Z}=4.2\mathrm{M}_{\odot }`$ for $`25\mathrm{M}_{\odot }`$. These results are for an initial metallicity equal to $`0.1\mathrm{Z}_{\odot }`$, which we assume characterizes the metallicity at redshifts around unity . In any case, the $`\mathrm{M}_\mathrm{Z}`$ values for SN progenitors with initial metallicity equal to $`\mathrm{Z}_{\odot }`$ are greater by about 10-20$`\%`$, which, if used, would lead to a decrease in the predicted flux. Keeping in mind that the rate of events varies inversely as $`\mathrm{M}_\mathrm{Z}`$, we set $`\mathrm{M}_\mathrm{Z}`$ equal to $`1\mathrm{M}_{\odot }`$ in the interest of obtaining an unambiguous upper bound. In Figure 1 we show the SRN spectrum that results from our adopted metal enrichment history and a conservative lower bound to the SN metallicity yields, $`\mathrm{M}_\mathrm{Z}=1\mathrm{M}_{\odot }`$.

## III Event rate at SuperK and SNO

It is not possible to detect SN relic neutrinos at all energies. For SuperK the observable energy window is likely to be from 19 to 35 MeV.
Below 10 MeV, the $`\overline{\nu }_\mathrm{e}`$ from reactors and the Earth will completely overwhelm the relic neutrinos. Above 10 MeV and below the observable energy window, the main sources of background are the solar neutrinos, radiation from outside the fiducial volume, and spallation-produced events due to the cosmic-ray muons in the detector . Above 19 MeV the background is primarily due to atmospheric neutrinos . At energies greater than about 35 MeV, the rapidly (exponentially) falling SRN flux (peaked around 3 MeV) becomes smaller than the atmospheric $`\overline{\nu }_\mathrm{e}`$ flux, as can be verified from Figure 1. Therefore the observable flux is obtained by integrating the differential flux over the neutrino energy range from 20.3 to 36.3 MeV (since $`ϵ=E_e+1.3`$ MeV, where $`E_e`$ is the energy of the positron and 1.3 MeV is the neutron-proton mass difference). We will also quote results in the more optimistic energy window of 15 - 35 MeV, in the hope that with better background subtraction SuperK will be able to probe these lower energy relic neutrinos. Detection of the SRN background in the much smaller SNO detector may be possible using coincident neutrons from $`\overline{\nu }_\mathrm{e}D\to nne^+`$. Because neutron detection at SNO is still in its infancy, we quote the total SNO event rate for positron energies above 10 MeV, corresponding to our SRN background. To calculate the event rate at SuperK, the detector is assumed to be 100$`\%`$ efficient in the observable energy window. The dominant reaction is $`\overline{\nu }_\mathrm{e}p\to ne^+`$, with a cross section ($`\sigma _p(ϵ)`$) two orders of magnitude larger than that of the scattering reaction ($`\nu _\mathrm{e}e\to \nu _\mathrm{e}e`$). The differential event rate in the interval $`dϵ`$ is then $`N_p\sigma _p(ϵ)j_\nu (ϵ)dϵ`$ and the predicted event rate at the detector is:

$$\mathrm{R}=𝒜\mathrm{N}_\mathrm{p}\frac{\mathrm{E}_\nu }{\mathrm{T}_\nu ^4}\int _1^{\mathrm{\infty }}dx\,\mathrm{N}_{\mathrm{SN}}(x)\sqrt{x}\int _{ϵ_1}^{ϵ_2}dϵ\,\frac{ϵ^2\sigma _p(ϵ)}{\mathrm{exp}(xϵ/\mathrm{T}_\nu )+1},$$ (5)

where the $`ϵ_i`$ delineate the energy window (in this case, 20.3 and 36.3 MeV, respectively) and $`\mathrm{N}_\mathrm{p}`$ is the number of free protons in the detector. For SuperK, with a fiducial volume of 22.5 ktons, $`\mathrm{N}_\mathrm{p}=1.51\times 10^{33}`$. Using the metal enrichment history to establish the supernova rate, the SN relic $`\overline{\nu }_\mathrm{e}`$ event rate at SuperK can be written as

$$\mathrm{R}=0.066\left(\frac{\mathrm{M}_{\odot }}{\mathrm{M}_\mathrm{Z}}\right)\left(\frac{\mathrm{E}_\nu }{10^{52}\mathrm{ergs}}\right)\left(\frac{\mathrm{T}_\nu }{\mathrm{MeV}}\right)\frac{\mathrm{events}}{22.5\mathrm{kton}\mathrm{year}}$$ (6)

where we use $`\sigma _p(ϵ)=9.52\times 10^{-44}E_ep_e\mathrm{cm}^2`$, with $`E_e`$ and $`p_e`$ (the energy and momentum of the positron) measured in $`\mathrm{MeV}`$. We have set $`h_{50}=1`$ in the interest of obtaining an upper bound to the event rate. Also, for the same reason, the average metal yield per supernova is taken to be $`1\mathrm{M}_{\odot }`$, a lower bound to that obtained in the Woosley & Weaver models. For completeness we show in Figure 1 the differential rate of $`\overline{\nu }_\mathrm{e}p\to ne^+`$ for our SRN background, with $`\mathrm{E}_\nu =11\times 10^{52}\mathrm{ergs}`$ and $`\mathrm{T}_\nu =5.3\mathrm{MeV}`$.
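Eqn. (6) makes the headline number easy to check: with the adopted $`\mathrm{M}_\mathrm{Z}=1\mathrm{M}_{\odot }`$, $`\mathrm{E}_\nu =11\times 10^{52}`$ ergs and $`\mathrm{T}_\nu =5.3`$ MeV, the scaling gives $`\mathrm{R}\approx 3.8\approx 4`$ events per 22.5 kton-yr, the upper bound quoted below. A trivial sketch of this arithmetic, together with the cross section used in Eqn. (5) (the function names are ours):

```python
def sigma_p(eps_mev):
    """Cross section for nu_bar_e + p -> n + e+ in cm^2, following the text:
    sigma_p = 9.52e-44 * E_e * p_e with E_e and p_e in MeV."""
    m_e = 0.511                      # electron mass, MeV
    E_e = eps_mev - 1.3              # positron energy: eps = E_e + 1.3 MeV
    p_e = (E_e**2 - m_e**2) ** 0.5   # positron momentum
    return 9.52e-44 * E_e * p_e

def rate_superk(M_Z=1.0, E_nu=11.0, T_nu=5.3):
    """Eqn. (6): events per 22.5 kton-yr in the 19-35 MeV window.
    M_Z in solar masses, E_nu in units of 10^52 ergs, T_nu in MeV."""
    return 0.066 * (1.0 / M_Z) * E_nu * T_nu

print(rate_superk())   # ~3.85, i.e. the "< 4 events" upper bound
print(sigma_p(20.3))   # cm^2 at the lower edge of the observable window
```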
With our adopted SN parameters, the SRN event rates for a 22.5 kton-year exposure at SuperK are predicted to be

$$\mathrm{R}<4\ \mathrm{events}\quad (19\le \mathrm{E}_\mathrm{e}(\mathrm{MeV})\le 35),$$ (7)
$$\mathrm{R}<7\ \mathrm{events}\quad (15\le \mathrm{E}_\mathrm{e}(\mathrm{MeV})\le 35).$$ (8)

Because the SRN spectrum falls rapidly with energy, the energy distribution of the events is strongly peaked at about 10 MeV (see Fig. 1; in 5 MeV bins from 10 MeV to 40 MeV, the percentages are 37:29:17:10:5:2). If the threshold could be lowered to 10 MeV, our upper bound to the event rate at SuperK would increase to about 10/year. In terms of the flux at the detector, the results are as follows: the upper bound to the SRN flux integrated over all energies is $`54\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$, while in the relevant energy window from 19 to 35 MeV the flux is $`1.6\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$ (to be compared to the current upper bound of $`226\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$). In the larger energy window from 15 to 35 MeV, the observable flux is $`3.7\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$. The reason for the large difference between the total and observable fluxes is two-fold. One, the observable energy window only captures the falling tail-end of the SRN spectrum, and two, the event rate at low energies is artificially enhanced by the SN rate which was assumed to be constant at high redshifts. In the energy window from 19 to 35 MeV, the expected background event rate from the atmospheric $`\overline{\nu }_\mathrm{e}`$ interacting with the protons ($`\overline{\nu }_\mathrm{e}p\to ne^+`$) in the detector can be calculated. Using the atmospheric neutrino flux from Gaisser et al. , the event rate for this background to the SRNs is only about $`0.5/\mathrm{yr}`$ (for 22.5 ktons of water). However, there is another source of background which is dominant. The atmospheric muon neutrinos interacting with the nucleons (both free and bound) in the fiducial volume produce muons. If these muons are produced with energies below the Cerenkov radiation threshold (kinetic energy less than 53 MeV), then they will not be detected, but their decay-produced electrons and positrons will. Consequently, the muon decay signal will mimic the $`\overline{\nu }_\mathrm{e}p\to ne^+`$ process in SuperK. The event rate from these muon decays was estimated to be around unity for the 0.58 kton-yr exposure of the Kamiokande II detector, forming the principal source of background after the various cuts had been implemented. Extrapolating to the fiducial volume of 22.5 ktons for SuperK, we expect that SuperK should see $`\sim `$39 events/yr as background to the SRN events. Although our predicted signal is much smaller than the sub-Cerenkov muon background, it may still be detectable because the energy distributions of the signal and the background are distinctly different. In such a case, a conservative criterion for the detectability of the signal is that it be greater than the statistical fluctuations of the background. However, even with three years of data, and assuming that the SRN flux is close to our upper bound, the SRN signal is only just about equal to the statistical fluctuations in the sub-Cerenkov muon background. This situation will improve, though not dramatically, if SuperK can lower its threshold (to SRN) to 15 MeV. Lastly we mention the SNO detector. Although much smaller than SuperK, the 1 kton SNO hopes to detect the SRN background by using the unique two-neutron final state in $`\overline{\nu }_\mathrm{e}D\to nne^+`$.
Using the cross section of Kubodera and Nozawa , the upper bound to the event rate above 10 MeV is a not-very-promising 0.1/yr/kton. Again we show the differential event rate in Figure 1. Note, however, that unlike the SRN signal in SuperK, this rate can be influenced by the large-$`z`$ SN rate, about which we know little.

## IV Discussion and Conclusions

### A Previous works

Supernova relic neutrinos have been the focus of many previous studies . The fluxes predicted in these studies spread over some two orders of magnitude, primarily due to the uncertain determinations of the present number density of galaxies, the present SN rate in our galaxy, and/or the SN redshift distribution. More recently, Totani et al. used the population synthesis method to model the evolution of star-forming galaxies and obtained a prediction for the SRN flux. They found an event rate at SuperK (in the energy interval from 15 to 40 MeV) of $`1.2\mathrm{yr}^{-1}`$, and the "most optimistic" prediction for their model was an event rate of 4.7/yr. Malaney used the Pei & Fall results to parameterize the evolution of the cosmic gas density, from which he calculated the star formation rate and, from that, the past supernova rate, finding a total SRN flux, integrated over all energies, of $`2.0-5.4\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$, depending on somewhat arbitrary low redshift corrections to the supernova rate. The work of Hartmann & Woosley , using a SN rate proportional to $`(1+z)^4`$ (motivated by ) and normalized to the present SN rate as derived from the H$`\alpha `$ observations of the local Universe, is most similar to ours. Their "best" estimate of the relic neutrino flux is $`0.2\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$. Although they do utilize Pei & Fall beyond $`z\sim 1`$, they (and we and others) have noted that this contribution is subdominant. However, Hartmann & Woosley do not discuss the backgrounds to detecting the SRN, and although their estimated flux is smaller than our upper bound by about a factor of five, they conclude the SRN may somehow be detectable. Although we agree with the Hartmann & Woosley estimate of the SRN flux, in the sense that if we adopted their choices of parameters rather than our "conservative" choices we would predict the same flux, we disagree that this small flux is detectable. All these previous results are similar to, while less than, the upper bound obtained in this paper. In fact, if we use our analysis of the SN rate along with the same IMF as used by Totani et al. for their spiral galaxies (which harbor most of the Type II supernovae ), our SRN event rate in the 15 to 40 MeV range (for comparison with Totani et al. ) at SuperK falls to 1/yr. The total integrated flux falls to $`11\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$, while the flux in the 15 to 40 MeV energy window becomes $`0.5\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$. These estimates agree well with those quoted from the previous works . In fact, the value of 1 event/yr obtained using the IMF from Totani et al. amounts to choosing the variables $`\mathrm{M}_\mathrm{Z}`$, $`\mathrm{T}_\nu `$ and $`\mathrm{E}_\nu `$ for an actual IMF rather than the extrema we have selected. It is more likely that any realistic IMF (chosen to fit other observables), when combined with a SN rate that peaks at $`z\sim 1`$ (as implied by the metal enrichment history), will yield an event rate that is an order of magnitude smaller than the upper bound we quote.
Our upper bound is robust because it is derived directly from the metal enrichment history, which suggests that the SN $`\mathrm{II}`$ rate can peak no earlier than $`z\sim 1`$.

### B Choice of Cosmology

Throughout, we have assumed that $`\mathrm{\Omega }_0=1`$ and $`q_0=0.5`$. It is of some interest to ask how our SRN background predictions change if we change the background cosmology. Reducing the non-relativistic matter density from critical ($`\mathrm{\Omega }_0<1`$), by allowing positive curvature and/or a cosmological constant $`\mathrm{\Lambda }`$, would reduce the expansion rate at late times and thereby increase the SRN flux, for the same $`H_0`$. The event rate increases by about 40% in going from an $`\mathrm{\Omega }_0=1`$ to an $`\mathrm{\Omega }_0=0.3,\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ Universe. But the estimation of the luminosity density (which is used to derive the metal enrichment rates) itself requires the assumption of a background cosmology, and it typically increases less rapidly with redshift for cosmologies with smaller $`\mathrm{\Omega }_0`$ . These two effects tend to cancel out, leaving the expected event rate nearly unchanged. Thus we do not expect our results to change substantially for a different background cosmology.

### C Neutrino Oscillations

The main goal of our work has been to obtain the most optimistic estimate of the SRN event rate at SuperK, with the intent that if results from SuperK should exceed this upper bound, it could provide hints of new physics beyond the standard model. Here, we consider neutrino oscillations as a mechanism for maximizing the SRN flux. Since $`\overline{\nu }_\mathrm{x}`$ (where $`\mathrm{x}=\mu `$ or $`\tau `$) only experience neutral current interactions, they decouple deeper in the SN, where the temperature is higher. As a result, they stream out of the SN with a higher temperature than the $`\overline{\nu }_\mathrm{e}`$. Because higher energy neutrinos are easier to detect, $`\overline{\nu }_\mathrm{e}\leftrightarrow \overline{\nu }_\mathrm{x}`$ oscillations have the potential to increase the SRN event rate. The maximum effect for any scenario is attained when the mixing is maximal. We will assume a mass hierarchy wherein the electron neutrino is the lightest (however, for an inverted mass hierarchy and resonant conversion in the presence of magnetic fields, see ). This implies that the MSW resonance condition is not satisfied for $`\overline{\nu }_\mathrm{e}\leftrightarrow \overline{\nu }_\mathrm{x}`$, but vacuum oscillations can still occur. If all three flavors are maximally mixed, then the oscillation probabilities average out to 1/3 for any reasonable choice of mass differences, because of the large distances traversed by the relic neutrinos (typically of order $`H_0^{-1}`$; for the oscillation length to be comparable to this, $`\mathrm{\Delta }m^2\sim 10^{-25}`$ eV²).
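The quoted $`\mathrm{\Delta }m^2`$ scale is easy to verify. A minimal sketch, assuming $`H_0=70`$ km s⁻¹ Mpc⁻¹ and a representative neutrino energy of 10 MeV, and using the standard two-flavor vacuum oscillation length $`L_{osc}[\mathrm{km}]\approx 2.48E[\mathrm{GeV}]/\mathrm{\Delta }m^2[\mathrm{eV}^2]`$:

```python
# Delta m^2 for which the vacuum oscillation length equals the Hubble
# distance c/H0 (H0 = 70 km/s/Mpc assumed; E_nu = 10 MeV).
MPC_IN_KM = 3.086e19
hubble_distance_km = (3.0e5 / 70.0) * MPC_IN_KM   # c/H0 in km

e_nu_gev = 0.01                                   # 10 MeV
dm2_ev2 = 2.48 * e_nu_gev / hubble_distance_km    # from L_osc = 2.48 E/dm2 [km]

print(f"dm2 ~ {dm2_ev2:.1e} eV^2")                # ~2e-25 eV^2
```

Any $`\mathrm{\Delta }m^2`$ much larger than this gives many oscillation lengths over cosmological distances, so the averaged probabilities of 1/3 apply.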
Such oscillations would make two-thirds of the original $`\overline{\nu }_\mathrm{e}`$ flux hotter, as they would be “born” (would have oscillated from $`\overline{\nu }_\mathrm{x}`$) with the same temperature as the $`\overline{\nu }_\mathrm{x}`$. To quantify the discussion here, we take $`\mathrm{T}_{\overline{\nu }_\mathrm{x}}=2\mathrm{T}_{\overline{\nu }_\mathrm{e}}`$ (we might be exaggerating the spectral difference between $`\overline{\nu }_\mathrm{x}`$ and $`\overline{\nu }_\mathrm{e}`$ considerably here ) and assume that the same amount of energy is expelled in all three flavors. This leads to an upper bound to the SRN event rate at SuperK of 11/yr, with an observable flux of $`4\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$ in the 19 – 35 MeV energy window. The upper bound is larger by about a factor of 3 as a result of the increase in the number of neutrinos in the exponential tail of the neutrino distribution (where the observable energy window lies), due to the increase in temperature. As before, $`\mathrm{M}_\mathrm{Z}`$ has been set equal to $`1\mathrm{M}_{\odot }`$. For the case where $`\overline{\nu }_\mathrm{e}`$ is maximally mixed with only one of either $`\overline{\nu }_\mu `$ or $`\overline{\nu }_\tau `$, the upper bounds are 9/yr and $`3\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$ for the event rate and observable flux, respectively. The upper bounds in the 15 – 35 MeV energy window are 14/yr for the three neutrino maximal mixing case, and 12/yr for the two neutrino maximal mixing case. In general, a decrease in the threshold (below 19 MeV) and neutrino oscillations seem to be required to boost the SRN flux to sufficient levels. Because the spectral shape of this oscillation-enhanced SRN signal is sufficiently different from the sub-Cerenkov muon background, it may be detectable as a distortion of the expected muon background, if the SRN flux is in the vicinity of the upper bound we have quoted. For SNO, the two neutrino maximal mixing case gives an event rate of 0.25/yr/kton, while three neutrino maximal mixing increases the rate to 0.29/yr/kton. A point to clarify here concerns the selection of the observable energy window, given that the event rate in the 19 – 35 MeV window is now relatively large. Due to the larger signal, the relic neutrinos only become sub-dominant to the atmospheric neutrinos around 60 MeV (for a relic neutrino flux close to the upper bound). In fact, integrating out to a neutrino energy of order 60 MeV would increase the SRN event rate by about 50%. However, the background from muon decay would increase by more than a factor of 3. One possibility this opens up is to also use the energy window from about 55 to 70 MeV, since the muon-decay background is cut off at $`m_\mu /2`$ (as decay occurs for muons at rest). For the same oscillation parameters as before, the upper bound to the event rate in the 55 to 70 MeV energy window is about 1/yr. However, the event rate due to the atmospheric neutrinos in the same energy window is of comparable magnitude, and so the situation is still not promising.
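Before closing this section, we note that the factor of $`\sim `$3 enhancement quoted above can be checked numerically. A minimal sketch, assuming a Boltzmann-shaped $`\overline{\nu }_\mathrm{e}`$ spectrum with $`\mathrm{T}_{\overline{\nu }_\mathrm{e}}=5`$ MeV (an assumed, representative value), a detection cross section scaling as $`E^2`$, a 20 MeV neutrino threshold (roughly the 19 MeV positron window edge), and fixed total emitted energy:

```python
import math

# Relative event rate above threshold for a Boltzmann-like spectrum
# (dN/dE ~ E^2 exp(-E/T)), with sigma ~ E^2 and the normalization fixed
# by the total emitted energy (integral E^3 exp(-E/T) dE = 6 T^4).
def rate_above(T, e_th=20.0, e_max=300.0, n=30000):
    de = (e_max - e_th) / n
    total = 0.0
    for i in range(n):
        e = e_th + (i + 0.5) * de
        total += e**4 * math.exp(-e / T) * de
    return total / (6.0 * T**4)

print(rate_above(10.0) / rate_above(5.0))   # ~3: doubling T triples the rate
```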
### D Conclusions

Using those observations most closely connected to the metal enrichment history of the Universe in order to relate the MER to the SNR, we have derived a robust upper bound to the supernova relic neutrino events at SuperK: 4 events in the energy window from 19 to 35 MeV for a 22.5 kton-yr exposure. We have argued that the SuperK signal is dominated by SN $`\mathrm{II}`$ from $`z<1`$, and so it is insensitive to the high redshift behavior of the metal enrichment rate. We use only the generic features of gravitational collapse SN models, which have been substantiated by observations of SN 1987A, to characterize the $`\overline{\nu }_\mathrm{e}`$ spectrum emergent from SN $`\mathrm{II}`$ . In combination, these facts argue for the robustness of the upper bound to the SRN event rate obtained here. In addition, we have analyzed the backgrounds to the SRN events and conclude that it is unlikely that SuperK will be able to detect these relic neutrinos, unless the Type II supernova rate does not track the metal enrichment rate, the observations of the star formation rate which lead to estimates of the metal enrichment rate at $`z<1`$ are in considerable error, and/or some physics beyond the standard model is at play. We also find that the event rate at SNO will most likely be too small to be detected. The effect of flavor oscillations on the SRN flux has also been studied, and the maximum possible increase in the event rate is less than a factor of 3. If the original flux is close to the upper bound quoted here and the mixing close to maximal, SuperK just might see the SRN flux as a distortion in its sub-Cerenkov muon background.

### E Acknowledgments

We acknowledge DOE support at The Ohio State University (DE-AC02-76ER01545) and at the University of Chicago (DE-FG02-90ER40560). We thank John Beacom for useful discussions and for pointing out an error in an earlier version of this paper.
# On Pair Content and Variability of Sub-Parsec Jets in Quasars

## 1 INTRODUCTION

One of the basic unresolved questions regarding the nature of jets in radio loud quasars is that of their composition: are they made from protons and electrons, or electron-positron pairs, or from a mixture of both? Arguments in favor of proton-electron jets in quasars have recently been advanced by Celotti & Fabian (1993). Using synchrotron self-Compton constraints from radio-core observations and information about the energetics of jets from radio-lobe studies, those authors showed that in the case of pure electron-positron jets the required number of e⁺e⁻ pairs is too high to be delivered from the central engine. The limit is imposed by the annihilation process (Guilbert, Fabian, & Rees 1983). On the other hand, the recently discovered circular polarization in the radio cores of the $`\gamma `$–ray bright OVV quasar 3C 279 and several other objects, and its interpretation via the “Faraday conversion” process, suggests that the jet plasma is dominated by e⁺e⁻ pairs (Wardle et al. 1998; Wardle & Homan 1999). The fact that jets are likely to be pair dominated has also been inferred from synchrotron self-Compton analyses of compact radio components in the radio galaxy M87 (Reynolds et al. 1996) and in the quasar 3C 279 (Hirotani et al. 1999). In this paper we derive constraints imposed on the pair content of quasar jets by X–ray observations of OVV quasars, i.e. those radio loud quasars which are observed at very small angles to the jet axis, and often detected in the MeV - GeV $`\gamma `$–ray regime. Our results suggest that the pair content of quasar jets is high, but that dynamically the jets are dominated by protons. The question of the composition of the jet plasma is closely related to that of the formation of the jet. Jets can be launched as outflows dominated by Poynting flux, generated in the force-free magnetosphere of the black hole, or as hydromagnetic winds driven centrifugally from an accretion disc (see, e.g., a review by Lovelace et al. 1999). Electromagnetically dominated outflows are converted to pair dominated jets (Romanova & Lovelace 1997; Levinson 1998), whilst hydromagnetic winds give rise to proton-electron dominated jets (Blandford & Payne 1982). While the pair-dominated jets are predicted to be relativistic, the proton-electron jets can be either relativistic or non-relativistic, depending on whether the magnetic forces dominate over gravity in the accretion disc corona (Meier et al. 1997). In particular, relativistic hydromagnetic jets can be launched deeply in the ergosphere of fast-rotating black holes (Koide et al. 1999). If this is the case, and the surrounding accretion disc corona is as hot as inferred from the spectra of Seyfert galaxies (see, e.g., Zdziarski et al. 1994; Matt 1999), the proton-electron jets can be pair-loaded via interactions with the coronal hard X–rays / soft $`\gamma `$–rays. We demonstrate that this process is efficient enough to provide the number of pairs required to account for the observed X–ray spectra of OVV quasars. Furthermore, the rapid X–ray variability of Seyferts (Green et al. 1993; Hayashida et al. 1998) indicates that the corona is likely to have a dynamical character (as would be the case if it is powered by magnetic flares), and thus the hydromagnetic outflows are expected to be loaded by pairs nonuniformly and non-axisymmetrically.
This, together with the radiation drag imposed by the coronal and disc radiation fields on pairs, can lead to a modulation of the velocity and density of the plasma in a jet, and therefore to the production of shocks and acceleration of particles responsible for the variable nonthermal radiation. Our paper is organized as follows: in §2 we demonstrate that pure pair jets overpredict the soft X–ray flux; in §3 we show that jets which are dynamically dominated by protons must be heavily pair-loaded in order to provide the observed nonthermal X–ray radiation; and in §4 we show that hydromagnetic outflows can be pair-loaded due to interaction with hard X–rays / soft $`\gamma `$–rays from the hot accretion disc corona. Our results are summarized in §5.

## 2 ELECTRON-POSITRON JETS

### 2.1 Cold pairs

If the relativistic jet is dynamically dominated by cold e⁺e⁻ pairs, then the external UV photons will be Comptonized by those pairs and thus boosted in frequency by a square of the bulk Lorentz factor $`\mathrm{\Gamma }`$ and collimated along the jet axis. In this case, in addition to the nonthermal radiation from the jet – which results in the phenomenon called blazar – the observer located at $`\theta _{obs}\lesssim 1/\mathrm{\Gamma }`$ should see a soft X–ray “bump” superimposed on the continuum in OVV quasars. Such a spectral feature, predicted theoretically by Begelman & Sikora (1987), has not been observed, and this fact can be used to derive an upper limit on the pair content of quasar jets (Sikora et al. 1997). The luminosity of the soft X–ray bump produced by the above “bulk-Compton” (BC) process is

$$L_{BC}\approx 𝒜\int _{r_0}\frac{1-e^{-\tau _j}}{\tau _j}\left|\frac{dE_e}{dt}\right|n_e𝑑V,$$ (1)

where

$$\left|\frac{dE_e}{dt}\right|=m_ec^2\left|\frac{d\mathrm{\Gamma }}{dt}\right|=\frac{4}{3}c\sigma _Tu_{diff}\mathrm{\Gamma }^2,$$ (2)

$$u_{diff}=\frac{\xi L_d}{4\pi r^2c},$$ (3)

$`𝒜`$ is the beaming amplification factor, $`L_d`$ is the luminosity of the accretion disc, $`\xi `$ is the fraction of the accretion disc luminosity which at the given distance $`r`$ is isotropized due to rescattering or reprocessing, $`n_e`$ is the density of electrons plus positrons, $`dV=\pi a^2dr`$, $`\tau _j=n_ea\sigma _T`$, $`a`$ is the cross-section radius of the jet, and $`r_0`$ is the distance at which the jet is fully formed (accelerated and collimated). Assuming that the jet is conical and that at $`r>r_0`$ the electron/positron number flux is conserved (no pair production), we have $`n_e\propto 1/r^2`$ and $`\tau _j=r_1/r`$, where by $`r_1`$ we denote the distance at which $`\tau _j=1`$. Then, for jets with a half-angle $`\theta _j\equiv a/r\lesssim 1/\mathrm{\Gamma }`$, Eqs. (1)-(3) give

$$L_{BC}\approx \frac{1}{3}(\mathrm{\Gamma }\theta _j)(\xi L_d)\mathrm{\Gamma }^3\{\begin{array}{cc}\mathrm{ln}\frac{r_1}{r_0}+1\hfill & \text{if }\tau _j(r_0)>1\hfill \\ \frac{r_1}{r_0}\hfill & \text{if }\tau _j(r_0)<1\hfill \end{array},$$ (4)

where we used $`𝒜=\mathrm{\Gamma }^2`$. The value of $`r_1`$ depends on the electron/positron number flux, $`dN_e/dt`$, which – for an energy flux in the jet $`L_j`$ dominated by the kinetic energy flux of cold pairs – is equal to $`L_j/m_ec^2\mathrm{\Gamma }`$. Noting that $`dN_e/dt\approx n_ec\pi a^2`$, we obtain

$$r_1=\frac{1}{\sigma _Tn_e\theta _j}\approx \frac{\sigma _TL_j}{\pi m_ec^3\theta _j\mathrm{\Gamma }}\approx 10^{17}\frac{L_{j,46}}{\theta _j\mathrm{\Gamma }}\mathrm{cm}.$$ (5)

If $`r_0>r_1`$, then $`r_1`$ should be treated only as a formal parameter which provides the normalization of $`\tau _j`$.
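As a quick numerical cross-check of the estimate in Eq. (5), a minimal sketch with CGS constants ($`L_j=10^{46}`$ erg s⁻¹ and $`\theta _j\mathrm{\Gamma }=1`$ assumed, as in the scaling above):

```python
import math

# Eq. (5): r_1 = sigma_T L_j / (pi m_e c^3 theta_j Gamma), in CGS units.
SIGMA_T = 6.652e-25    # Thomson cross section [cm^2]
M_E_C2 = 8.187e-7      # electron rest energy [erg]
C = 2.998e10           # speed of light [cm/s]

def r1(L_j=1e46, theta_j_gamma=1.0):
    return SIGMA_T * L_j / (math.pi * M_E_C2 * C * theta_j_gamma)

print(f"r1 ~ {r1():.1e} cm")   # ~9e16 cm, i.e. ~10^17 cm as quoted
```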
However, noting the very large value of $`r_1`$, one can expect that $`r_0<r_1`$ and, therefore, the predicted bulk-Compton luminosity is

$$L_{BC}>3\times 10^{47}(\mathrm{\Gamma }\theta _j)(\xi L_d)_{45}(\mathrm{\Gamma }/10)^3(\mathrm{ln}(r_1/r_0)+1)\mathrm{erg}\mathrm{s}^{-1}.$$ (6)

The BC spectral component peaks at $`h\nu \sim \mathrm{\Gamma }^2h\nu _{UV}\approx (\mathrm{\Gamma }/10)^2`$ keV, where typical luminosities observed in OVV quasars are $`\sim 10^{46}`$ erg s⁻¹ (Sambruna 1997), while the spectra are consistent with simple power laws (cf. Kubo et al. 1998). Thus, one can conclude that pure electron-positron jet models can be excluded as overpredicting the soft X–ray radiation of OVV quasars.

### 2.2 Relativistic “thermal” pairs

It is of course possible that, because of inefficient cooling of electrons/positrons below a given energy, the multiple reacceleration process balances adiabatic losses in the conically diverging jet, and the pairs, once accelerated, remain relativistic forever. If the relativistic electrons are narrowly distributed around some $`\overline{\gamma }`$, then taking into account that $`|dE_e/dt|\propto \mathrm{\Gamma }^2\overline{\gamma }^2`$, $`n_e\propto L_j/\overline{\gamma }`$, and $`r_1=r_1(\overline{\gamma }=1)/\overline{\gamma }`$, and assuming $`r_0>r_1`$, the bulk-Compton luminosity would be

$$L_{BC}=3\times 10^{47}(\mathrm{\Gamma }\theta _j)(\xi L_d)_{45}(\mathrm{\Gamma }/10)^3(r_1/r_0)\overline{\gamma }\approx 3\times 10^{48}\frac{(\xi L_d)_{45}(\mathrm{\Gamma }/10)^3L_{j,46}}{(r_0/10^{16}\mathrm{cm})}\mathrm{erg}\mathrm{s}^{-1},$$ (7)

and would peak at $`h\nu \sim \mathrm{\Gamma }^2\overline{\gamma }^2h\nu _{UV}\approx \overline{\gamma }^2(\mathrm{\Gamma }/10)^2`$ keV. No such “bumps” have been detected at keV energies. Alternatively, one can speculate that the X–ray spectra of OVV quasars consist of superposed multiple “thermal” peaks produced over several decades of distance (cf. Sikora et al. 1997). In this case it may be possible to match the observed X–ray spectral slopes, but nonetheless, the model predicts too large a luminosity.
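Before moving on, the prefactor in Eq. (6) can be verified directly from Eq. (4); a minimal sketch with the fiducial values used in the text ($`\xi L_d=10^{45}`$ erg s⁻¹, $`\mathrm{\Gamma }=10`$, $`\mathrm{\Gamma }\theta _j=1`$, logarithmic factor of unity):

```python
# Eq. (4) with A = Gamma^2:
# L_BC ~ (1/3)(Gamma theta_j)(xi L_d) Gamma^3 (ln(r1/r0) + 1)
def l_bc(xi_ld=1e45, gamma=10.0, gamma_theta_j=1.0, log_term=1.0):
    return gamma_theta_j * xi_ld * gamma**3 * log_term / 3.0

print(f"L_BC ~ {l_bc():.1e} erg/s")   # ~3e47 erg/s, the prefactor in Eq. (6)
```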
## 3 PROTON-ELECTRON JETS

The other extreme would have no e⁺e⁻ pairs in the jet. For a given energy flux in the jet, $`L_j`$, which now is proportional to $`n_pm_p`$, the number of electrons in the jet is $`m_e/m_p`$ times smaller than the number of electrons plus positrons in a jet made from cold pairs. Thus, noting that $`r_1(n_e=n_p)=(m_e/m_p)r_1(n_p=0)\approx 0.5\times 10^{14}`$ cm, one can find that proton-electron jets do not overproduce the soft X–ray luminosities observed in OVV quasars, provided that $`r_0>15r_1\approx 10^{15}`$ cm. However, such pure proton-electron jets are relatively inefficient in producing the nonthermal radiation, and this is for the same reason as above – the low number of electrons. This is apparent from a study of the low energy tails of the nonthermal radiation components, where the requirement on the number of electrons is largest. In the case of synchrotron radiation, such tails are not observed because they are self-absorbed, and the only spectral band where the presence of the lower energy relativistic electrons can be evident is the soft and mid-energy X–rays, 0.1 – 20 keV. It has been shown in many papers (see, e.g., Sikora, Begelman, & Rees 1994) that the $`\gamma `$–ray spectra observed by the EGRET instrument in OVV quasars are likely to be produced by Comptonization of the external diffuse radiation field, via the so-called external radiation Compton (ERC) process. Can this process also be responsible for the X–ray spectra of OVV quasars? In the one zone model, and for a narrow spectral distribution of the soft radiation field, power law X–ray spectra are produced by electrons with an energy distribution obeying $`n_\gamma ^{}=C_n\gamma ^{-s}`$, where $`s=2\alpha _X+1`$ (note that throughout this paper, all quantities except for $`\gamma `$ are primed if measured in the frame co-moving with the jet). Assuming that at a distance $`r_{fl}`$, where the blazar phenomenon is produced, all available electrons are accelerated, and that the energy flux in the jet is dominated by cold protons, i.e., that

$$n_e^{}=\int _{\gamma _{min}}n_\gamma ^{}𝑑\gamma =n_p^{}\approx \frac{L_j}{m_pc^3\mathrm{\Gamma }^2\pi a^2},$$ (8)

we obtain

$$C_n=\frac{(s-1)\gamma _{min}^{s-1}L_j}{m_pc^3\pi a^2\mathrm{\Gamma }^2}.$$ (9)

The ERC luminosity is then given by

$$(L_\nu \nu )_{ERC}\approx \mathrm{\Gamma }^4(L_{\nu ^{}}^{}\nu ^{})_{ERC}\approx \mathrm{\Gamma }^4(\frac{1}{2}N_\gamma \gamma )m_ec^2\left|\frac{d\gamma }{dt^{}}\right|_{ERC}\approx 2.4\frac{\sigma _T}{m_pc^2}au_{diff}L_j\mathrm{\Gamma }^4\alpha _X\gamma _{min}^{2\alpha _X}\gamma ^{2(1-\alpha _X)},$$ (10)

where

$$m_ec^2\left|\frac{d\gamma }{dt^{}}\right|_{ERC}\approx (16/9)c\sigma _Tu_{diff}\mathrm{\Gamma }^2\gamma ^2,$$ (11)

$$N_\gamma \approx \frac{4}{3}\pi a^3n_\gamma ^{},$$ (12)

and

$$\nu \approx (4/3)\mathrm{\Gamma }^2\gamma ^2\nu _{ext}.$$ (13)

The external radiation field in quasars, as seen in the jet frame at a distance

$$r_{fl}=\frac{a}{\theta _j}\approx \frac{ct_{fl}\mathrm{\Gamma }}{\theta _j}\approx 2.6\times 10^{17}\frac{(t_{fl}/1\mathrm{d})(\mathrm{\Gamma }/10)^2}{(\theta _j\mathrm{\Gamma })}\mathrm{cm},$$ (14)

where $`t_{fl}`$ is the time scale of the duration of flares observed in OVV quasars, is dominated by two components, broad emission lines and near infrared radiation from hot dust. The broad emission lines provide a radiation field with $`h\nu _{ext}\approx 10`$ eV and

$$u_{diff(BEL)}\approx \frac{L_{BEL}}{4\pi r_{fl}^2c}\approx 3.9\times 10^{-2}\frac{L_{BEL,45}(\theta _j\mathrm{\Gamma })^2}{(t_{fl}/1\mathrm{d})^2(\mathrm{\Gamma }/10)^4}\mathrm{erg}\mathrm{cm}^{-3}.$$ (15)

The spectrum of the infrared radiation from hot dust, on the other hand, peaks around $`h\nu _{ext}=3kT\approx 0.26(T/1000\mathrm{K})`$ eV, and at a distance $`r_{fl}<r_{IR}=\sqrt{L_d/4\pi \sigma _{SB}T^4}`$ has energy density

$$u_{diff(IR)}\approx \xi _{IR}\frac{4\sigma _{SB}}{c}T^4\approx 7.6\times 10^{-3}\xi _{IR}\left(\frac{T}{1000\mathrm{K}}\right)^4\mathrm{erg}\mathrm{cm}^{-3},$$ (16)

where $`T`$ is the temperature of the dust, $`\sigma _{SB}`$ is the Stefan-Boltzmann constant, and $`\xi _{IR}`$ is the fraction of the central source covered by the innermost parts of a dusty molecular torus. For $`\mathrm{\Gamma }=10`$ and a typical 1 – 20 keV spectral index $`\alpha _X\approx 0.6`$ (Kii et al. 1992; Kubo et al. 1998), Comptonization of broad emission lines gives

$$(L_\nu \nu )_{C(BEL)}\approx 6.4\times 10^{43}\frac{L_{BEL,45}}{(t_{fl}/1\mathrm{d})}\left(\frac{\mathrm{h}\nu }{1\mathrm{keV}}\right)^{0.4}\gamma _{min}^{1.2}L_{j,46}(\theta _j\mathrm{\Gamma })\mathrm{erg}\mathrm{s}^{-1},$$ (17)

while Comptonization of near infrared radiation gives

$$(L_\nu \nu )_{C(IR)}\approx 5.1\times 10^{42}\left(\frac{\xi _{IR}}{0.1}\right)\left(\frac{T_{IR}}{1000\mathrm{K}}\right)^4\left(\frac{t_{fl}}{1\mathrm{d}}\right)\left(\frac{\mathrm{h}\nu }{1\mathrm{keV}}\right)^{0.4}\gamma _{min}^{1.2}L_{j,46}\mathrm{erg}\mathrm{s}^{-1}.$$ (18)

Note that in the case of Comptonization of UV photons, radiation at 1 keV is produced by electrons which are only weakly relativistic, with $`\gamma \sim 1`$ (see Eq. (13)), and therefore this implies that $`\gamma _{min}`$ also needs to be $`\sim 1`$. In the case of Comptonization of near infrared radiation, $`\gamma _{min}\approx 50/\mathrm{\Gamma }`$.
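These electron energies follow in one line from Eq. (13); a minimal check (with $`\mathrm{\Gamma }=10`$ as above, $`h\nu _{ext}=10`$ eV for the broad emission lines and 0.26 eV for the hot dust):

```python
import math

# Eq. (13): nu ~ (4/3) Gamma^2 gamma^2 nu_ext  =>  gamma for 1 keV photons.
def gamma_for_1kev(h_nu_ext_ev, gamma_bulk=10.0):
    return math.sqrt(3.0 * 1000.0 / (4.0 * gamma_bulk**2 * h_nu_ext_ev))

print(gamma_for_1kev(10.0))   # ~0.9 -> gamma_min ~ 1 for the BEL field
print(gamma_for_1kev(0.26))   # ~5.4 -> ~54/Gamma, i.e. gamma_min ~ 50/Gamma
```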
As one can see from Eqs. (17) and (18), Comptonization of external radiation by relativistic electrons in the proton-electron jets gives 1 keV luminosities which are $`\sim 100/L_{j,46}`$ times smaller than observed. Let us now determine whether the observed X–ray luminosities can be produced by pure proton-electron jets via the synchrotron self-Compton (SSC) process, in addition to the ERC process responsible for the hard $`\gamma `$–ray emission. The luminosity of the SSC radiation can be estimated using the formula

$$(L_\nu \nu )_{SSC}\approx \mathrm{\Gamma }^4(L_\nu ^{}\nu ^{})_{SSC}\approx \mathrm{\Gamma }^4(\frac{1}{2}N_\gamma \gamma )m_ec^2\left|\frac{d\gamma }{dt^{}}\right|_{SSC}\approx \frac{2\sigma _T}{3\pi m_pc^3}\frac{L_{syn}L_j}{a\mathrm{\Gamma }^2}\alpha _X\gamma _{min}^{2\alpha _X}\gamma ^{2(1-\alpha _X)},$$ (19)

where

$$\left|\frac{d\gamma }{dt^{}}\right|_{SSC}=\frac{4c\sigma _T}{3m_ec^2}u_{syn}^{}\gamma ^2,$$ (20)

$$u_{syn}^{}\approx \frac{L_{syn}}{2\pi ca^2\mathrm{\Gamma }^4},$$ (21)

and

$$\nu \approx (4/3)\gamma ^2\nu _{syn,m},$$ (22)

where $`h\nu _{syn,m}\sim 0.1`$ eV is the typical location of the synchrotron spectrum peak in OVV quasars (cf. Fossati et al. 1998). As is apparent from Eq. (22), production of 1 keV radiation by the SSC process involves electrons with $`\gamma \sim 100`$. Therefore, $`\gamma _{min}`$ is not restricted to such low values as in the case of the ERC processes, and, in principle, for $`\gamma _{min}\sim 100`$, the SSC model can reproduce the observed soft X–ray luminosities:

$$(L_\nu \nu )_{SSC}\approx 7.3\times 10^{45}\frac{L_{syn,47}L_{j,46}}{(t_{fl}/1\mathrm{d})(\theta \mathrm{\Gamma })}\left(\frac{\mathrm{h}\nu }{1\mathrm{keV}}\right)^{0.4}(\gamma _{min}/100)^{1.2}\mathrm{erg}\mathrm{s}^{-1},$$ (23)

where as before we used $`\mathrm{\Gamma }=10`$ and $`\alpha _X=0.6`$. However, since the electrons which produce X–ray spectra via the SSC process have the same energy range as those electrons which produce $`\gamma `$–rays above 1 MeV, the spectral slopes of both should be similar. The observations show that this is not the case; the $`\gamma `$–ray spectra above 1 MeV are much steeper than the X–ray spectra in the 1 – 20 keV range, typically by $`\mathrm{\Delta }\alpha \approx 0.5`$ (Pohl et al. 1997). This contradiction can be eliminated by assuming that most of the electrons are injected with $`\gamma \gtrsim 500`$. The X–rays are then produced by electrons which reach energies appropriate for X–ray production ($`100<\gamma <500`$ for 1 – 25 keV) by radiative energy losses, and the resulting slope is $`\alpha _X\approx 0.5`$ (Ghisellini et al. 1998; Mukherjee et al. 1999). Summarizing this section, we conclude that:

• Production of hard X–ray spectra by the ERC process requires $`\gamma _{min}<10`$ and a pair to proton number ratio

$$\frac{n_{pairs}}{n_p}\approx 50\frac{L_{SX,46}}{L_{j,46}},$$ (24)

where $`L_{SX}\sim 10^{46}`$ erg s⁻¹ is the typical luminosity observed in OVV quasars around 1 keV;

• Hard X–ray spectra can be produced by pure proton-electron jets via the SSC process, but this requires extremely high values of the minimum electron injection energy.
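The electron energy behind the second bullet follows directly from Eq. (22); a minimal check (with $`h\nu _{syn,m}=0.1`$ eV as quoted above):

```python
import math

# Eq. (22): nu ~ (4/3) gamma^2 nu_syn,m  =>  gamma needed for 1 keV photons
gamma_ssc = math.sqrt(3.0 * 1000.0 / (4.0 * 0.1))   # h nu_syn,m = 0.1 eV
print(gamma_ssc)   # ~87, i.e. gamma ~ 100 as stated
```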
## 4 PAIR PRODUCTION AND VARIABILITY

We propose a scenario where jets are launched as proton-electron outflows in the innermost parts of the accretion flow and are loaded by pairs due to interactions with hard X–rays / soft $`\gamma `$–rays produced in the hot accretion disc coronae. This can well occur via a two-step process. The first step is Compton boosting of coronal photons (with initial energies of 100 – 300 keV) up to a few MeV by cold electrons in the outflow propagating through the central region (Begelman & Sikora 1987). Provided that the luminosity of the coronal radiation at $`>100`$ keV is $`L_{s\gamma }\sim 10^{46}`$ erg s⁻¹, as can be deduced from extrapolation of the $`2`$–$`10`$ keV spectra observed in non-OVV radio-loud quasars (see, e.g., Cappi et al. 1997; Xu et al. 1999), one can find that the opacity for the above interactions is very high,

$$\tau _{e\gamma }\approx n_xr_{corona}\sigma _T\approx 15\frac{(L_{s\gamma }/10^{46}\mathrm{erg}\mathrm{s}^{-1})}{(h\nu /200\mathrm{keV})(r_{corona}/3\times 10^{15}\mathrm{cm})}.$$ (25)

This means that each electron in the inner parts of the outflow (essentially forming a “proto-jet”) produces on the order of 10 or more 1 – 3 MeV photons. The second step is the absorption of the MeV photons by the coronal (100 – 300 keV) photons in the pair creation process. The pairs created in such a manner are dragged along by the jet, but before leaving the compact coronal radiation field they produce a second generation of MeV photons, which in turn produce the next generation of pairs. Such a pair cascade can continue until the proto-jet becomes opaque to the coronal radiation, i.e., until $`n_er_{corona}\sigma _T\sim 1`$. Within this limit, the electron/positron flux integrated over the cross-section of the proto-jet can reach the value

$$\dot{N}_e\approx n_ec\mathrm{\Omega }_ir_{corona}^2\approx \frac{c\mathrm{\Omega }_ir_{corona}}{\sigma _T},$$ (26)

where $`\mathrm{\Omega }_i`$ is the initial solid angle of the outflow. Comparing this electron/positron flux with the total proton flux

$$\dot{N}_p\approx \frac{L_j}{\mathrm{\Gamma }m_pc^2},$$ (27)

we find that proton-electron winds can be loaded by pairs in the central compact X–ray source up to the value

$$\frac{n_{pairs}}{n_p}\approx \frac{\dot{N}_e/2}{\dot{N}_p}\approx \frac{m_pc^3}{2\sigma _T}\frac{r_{corona}\mathrm{\Omega }_i\mathrm{\Gamma }}{L_j}\approx 30\frac{r_{corona}}{3\times 10^{15}\mathrm{cm}}\frac{\mathrm{\Omega }_i}{L_{j,46}}\frac{\mathrm{\Gamma }}{3}.$$ (28)

This corresponds roughly to the pair content given by Eq. (24), provided that $`\mathrm{\Omega }_i`$ is not very small. Note that a large initial opening angle of the central outflow is expected, as jets with $`L_j\sim 10^{46}`$ erg s⁻¹ carry too much momentum to be effectively collimated by the innermost parts of the accretion disc corona. Due to radial quasi-expansion, the outflow is rapidly diluted and can be collimated into a narrow jet by disc winds at $`r>100`$ gravitational radii (cf. Begelman 1995).
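The numerical coefficient in Eq. (28) can be checked directly; a minimal sketch in CGS units, with the fiducial values from the equation:

```python
# Eq. (28): n_pairs/n_p ~ (m_p c^3 / 2 sigma_T) * r_corona Omega_i Gamma / L_j
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
M_P_C2 = 1.503e-3     # proton rest energy [erg]
C = 2.998e10          # speed of light [cm/s]

def pair_ratio(r_corona=3e15, omega_i=1.0, gamma=3.0, L_j=1e46):
    return M_P_C2 * C * r_corona * omega_i * gamma / (2.0 * SIGMA_T * L_j)

print(f"n_pairs/n_p ~ {pair_ratio():.0f}")   # ~30, matching Eq. (28)
```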
It should be mentioned here that loading of quasar jets by pairs via absorption of $`\gamma `$–rays produced within the jet by the external radiation field has also been proposed by Blandford and Levinson (1995) (BL95). However, their scenario is very different from ours in many respects. In the BL95 model, both pairs and nonthermal radiation are produced over several decades of distance; in our model, pair production takes place at the base of the jet, whereas nonthermal radiation is produced at distances $`10^{17}`$–$`10^{18}`$ cm. The BL95 model involves relativistic electrons, and pairs are produced by absorption of nonthermal radiation extending up to GeV energies, while our pair loading scenario involves cold electrons (as measured in the jet comoving frame), and pairs are produced by absorption of photons with energies of 1 – 3 MeV. In their scenario, Comptonization of the UV bump by relativistic pair cascades leads to the production of a power-law X–ray spectrum which is softer than that observed in OVV quasars; in our scenario — in the region where pairs are injected, i.e., at the base of the jet — the UV bump is Comptonized only by cold pairs, and this leads to the production of radiation only around $`h\nu _{UV}(\mathrm{\Gamma }/3)^2\approx 100`$ eV. Due to the wide opening angle of the jet at its base, this radiation is much less collimated than the nonthermal radiation produced at larger distances, and therefore in OVV quasars it may be relatively inconspicuous. The $`\sim 100`$ eV excess may eventually be detectable in steep spectrum radio loud quasars, which have jets pointing further away from our line of sight. However, due to absorption by the ISM in the host, or in our own Galaxy, this excess is predicted to be weak, and difficult to detect. Another attractive feature of our model is that the pair loading of the jet via interactions of the proto-jet with the hard X–rays / soft $`\gamma `$–rays produced by accretion disc coronae can be responsible for the fast ($`\sim `$ day) variability observed in OVV quasars. We know from observations that the (presumably isotropic) X–ray emission from Seyfert galaxies – and thus, by analogy, the non-jet, isotropic component in radio loud quasars – is rapidly variable (although in OVV quasars this component is “swamped” by the stronger, relativistically boosted flux). This suggests that the corona is likely to have a dynamical character: it may be powered by magnetic flares, or else by a possible instability of the innermost region of the accretion disk. In either case, the jet is expected to be loaded by pairs non-uniformly and non-axisymmetrically. The patches of local pair excess in the jet suffer a large radiation drag (Sikora et al. 1996) and are forced to move slower than the surrounding gas. Therefore, they provide natural sites for shock formation and particle acceleration. While the above mechanism is viable as an explanation of the rapid X–ray and $`\gamma `$–ray variability observed in OVV quasars, we note that pair density variations, as modulated by magnetic flares in the disk, are too rapid to produce variability of the radio flux. The long-term (months to years) variability of OVV quasars observed in all spectral bands, including radio, is more likely to result from modulation of the variable flux of protons. Such modulation can be induced by the variability of the accretion rate in the inner parts of the accretion disc. The observed long term optical variability in both radio-loud and radio-quiet quasars supports this view (see, e.g., Giveon et al. 1999 and references therein).
## 5 SUMMARY

• Models of quasar jets consisting purely of e⁺e⁻ pairs can be excluded, because they predict much larger soft X–ray luminosities than observed in OVV quasars. On the other hand, models with jets consisting solely of proton-electron plasma are also excluded, as they predict much weaker nonthermal X–ray radiation than observed in OVV quasars. Spectra of nonthermal flares in those objects can be explained in terms of a simple homogeneous ERC model, provided that the number of pairs per proton reaches values $`\sim 50(L_{SX}/L_j)`$.

• We suggest that initially, jets consist mainly of proton-electron plasma (where the protons provide the inertia to account for the kinetic luminosity of the jet), and subsequently are loaded by e⁺e⁻ pairs via interactions with hard X–rays / soft $`\gamma `$–rays from hot accretion disc coronae. This requires that the coronal temperatures reach values $`\sim 100`$ keV, which are consistent with observations of Seyfert galaxies, whose spectra are uncontaminated by relativistic jets.

• Non-steady and non-axisymmetric pair loading of jets by X–rays from magnetic flares in the corona can be responsible for the short term ($`\sim `$ day) variability observed in OVV quasars. It should be emphasized here that alternative mechanisms of variability, such as modulation of the total energy flux in a jet by the accretion rate, or precession of a jet (cf. Gopal-Krishna & Wiita 1992), cannot operate on such short time scales. The lack of rapid (time scale of $`\sim `$ day), high amplitude variability in the UV band of radio lobe dominated quasars supports this view.

• The dissipative sites in quasar jets, where electrons/positrons are accelerated and produce the nonthermal flares observed in OVV quasars, can be provided by shocks produced by collisions between inhomogeneities induced by non-uniform pair loading of the proto-jets. These shocks (and therefore particle acceleration) can eventually be amplified at a distance of 0.1 – 1 pc due to reconfinement of the jet by the external gas pressure (Komissarov & Falle 1997; Nartallo et al. 1998).

We are grateful to Annalisa Celotti, the referee, whose thoughtful comments helped us to substantially improve the paper. M.S. thanks Annalisa Celotti and Paolo Coppi for stimulating discussions, and, in particular, for pointing out that the pair-loading process in proto-jets can saturate. We would like to thank the Institute for Theoretical Physics at U.C. Santa Barbara for its hospitality. This project was partially supported by ITP/NSF grant PHY94-07194, NASA grants and contracts to the University of Maryland and USRA, and the Polish KBN grant 2P03D 00415.
# The detection of linear polarization in the afterglow of GRB 990510 and its theoretical implications

## Introduction

Polarization is one of the clearest signatures of synchrotron radiation, if this is produced by electrons gyrating in a magnetic field that is at least in part ordered. For this reason, polarization measurements can provide a crucial test of the synchrotron shock model mes97 , the leading scenario for the production of the burst and, in particular, the afterglow photons. Attempts to measure the degree of linear polarization yielded only an upper limit ($`2.3\%`$ for GRB 990123 hjo99 ), until the observations of the afterglow of the burst of May 10, 1999. A small but significant amount of polarization was detected ($`1.7\pm 0.2\%`$ cov99 ) $`\sim 18`$ hours after the BATSE trigger, and confirmed in a subsequent observation two hours later wij99 . Even if synchrotron radiation can naturally account for the presence of linearly polarized light in a GRB afterglow, a significant degree of anisotropy in the magnetic field configuration or in the fireball geometry is required. If, in fact, the synchrotron emission is produced in a fully symmetrical set-up, all the polarization components average out, giving a net unpolarized flux. The presence of a partially ordered magnetic field (in causally disconnected domains) has been discussed by Gruzinov & Waxman gru99 ; however, their model overpredicts, in its simplest formulation, the observed amount of polarization. Here we discuss a different possibility, in which the asymmetry is provided by a collimated fireball observed off–axis, while the magnetic field is tangled in the plane perpendicular to the velocity vector of the fireball expansion. Indeed, the smooth break in the lightcurve of GRB 990510 isr99 has been interpreted as due to a collimated fireball observed slightly off–axis.

## GRB 990510 measurements

GRB 990510 was detected by BATSE on-board the Compton Gamma Ray Observatory and by the BeppoSAX Gamma Ray Burst Monitor and Wide Field Camera on 1999 May 10.36743 UT kip99 ; dad99 . Its fluence ($`2.5\times 10^{-5}`$ erg cm⁻² above 20 keV) was relatively high kip99 . Follow up optical observations started $`\sim 3.5`$ hr later and revealed an $`R\sim 17.5`$ axe99 optical transient (OT). The OT showed initially a fairly slow flux decay, $`F_\nu \propto t^{-0.85}`$ isr99 , which gradually steepened; Vreeswijk et al. vre99 detected Fe II and Mg II absorption lines in the optical spectrum of the afterglow. This provides a lower limit of $`z=1.619\pm 0.002`$ to the redshift, and a $`\gamma `$–ray energy of $`>10^{53}`$ erg in the case of isotropic emission. We observed the OT associated with GRB 990510 $`\sim 18`$ hours after the gamma–ray trigger at the ESO VLT-Antu (UT1) in polarimetric mode, performing four 10 minute exposures in the R band at four angles (0°, 22.5°, 45° and 67.5°) of the retarder plate cov99 . The average magnitude of the OT in the four exposures was $`R\approx 19.1`$. Relative photometry with respect to all the stars in the field was performed, and each pair of simultaneous measurements at orthogonal angles was used to compute the points in Fig. 1 (left panel) (see cov99 for details). The parameter $`S(\varphi )`$ is related to the degree of linear polarization $`P`$ and to the position angle of the electric field vector $`\vartheta `$ by:

$$S(\varphi )=P\mathrm{cos}2(\vartheta \varphi ).$$ (1)

$`P`$ and $`\vartheta `$ are evaluated by fitting a cosine curve to the observed values of $`S(\varphi )`$.
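A minimal sketch of such a fit is shown below. The four $`S(\varphi )`$ values are hypothetical, chosen to be consistent with the $`P`$ and $`\vartheta `$ quoted in the next paragraph; writing $`q=P\mathrm{cos}2\vartheta `$ and $`u=P\mathrm{sin}2\vartheta `$ makes the fit linear:

```python
import numpy as np

# Fit S(phi) = P cos 2(theta - phi) = q cos(2 phi) + u sin(2 phi)
phi = np.deg2rad([0.0, 22.5, 45.0, 67.5])          # retarder-plate angles
S = np.array([-0.0158, -0.0157, -0.0064, 0.0066])  # hypothetical measurements

A = np.column_stack([np.cos(2 * phi), np.sin(2 * phi)])
q, u = np.linalg.lstsq(A, S, rcond=None)[0]

P = np.hypot(q, u)
theta = 0.5 * np.degrees(np.arctan2(u, q)) % 180.0
print(f"P = {P:.3f}, theta = {theta:.0f} deg")     # ~0.017, ~101 deg
```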
The derived linear polarization of the OT of GRB 990510 is $`P=(1.7\pm 0.2)`$% (1$`\sigma `$ error), at a position angle of $`\vartheta =101^{}\pm 3^{}`$ degrees. Fig. 1 (left panel) shows the data points and the best fit cosine curve. The statistical significance of this measurement is very high. A potential problem is represented by “spurious” polarization introduced by dust grains interposed along the line of sight, which may be preferentially aligned in one direction. The normalization of the OT measurements to the stars in the field already corrects for the average interstellar polarization of these stars, even if this does not necessarily account for all the effects of the galactic ISM along the line of sight to the OT (e.g. the ISM could be more distant than the stars, not inducing any polarization of their light). To check this possibility, we plot in Fig. 1 (right panel) the degree of polarization vs. the instrumental position angle for each star and for the OT. It is apparent that, while the position angles of all the stars are consistent with being the same (within 10 degrees), the OT clearly stands out. The polarization position angle of stars close to the OT differs by 45° from the position angle of the OT. This is contrary to what one would expect if the polarization of the OT were due to the galactic ISM. Polarization induced by absorption in the host galaxy can be constrained to be $`P_{host}<0.2\%`$, due to the lack of any absorption in the optical filters in addition to the local value (see cov99 for more details). We therefore conclude that the OT, even if contaminated by interstellar polarization, must be intrinsically polarized to give the observed orientation.

## Polarization from beamed fireballs

We consider a slab of magnetized plasma, in which the configuration of the magnetic field is completely tangled if the slab is observed face on, while it has some degree of alignment if the slab is observed edge on. Such a field can be produced by compression in one direction of a volume of 3D tangled magnetic field lai80 or by the Weibel instability med99 . If the slab is observed edge–on, the radiation is therefore polarized at a level, $`P_0`$, which depends on the degree of order of the field in the plane. If the emitting slab moves in the direction normal to its plane with a bulk Lorentz factor $`\mathrm{\Gamma }`$, we have to take into account the relativistic aberration of photons. This effect causes photons emitted at $`\theta ^{}=\pi /2`$ in the (primed) comoving frame $`K^{}`$ to be observed at $`\theta \approx 1/\mathrm{\Gamma }`$ (see also med99 ). We assume that the fireball is collimated into a cone of semi-aperture angle $`\theta _c`$, and that the line of sight makes an angle $`\theta _o`$ with the jet axis. As long as $`\mathrm{\Gamma }>1/(\theta _c-\theta _o)`$, the observer receives photons from a circle of semi-aperture angle $`1/\mathrm{\Gamma }`$ around $`\theta _o`$. Consider the edge of this circle: radiation coming from each sector is highly polarized, with the electric field oscillating in the radial direction (see ghi99 for more details). As long as we observe the entire circle, the configuration is symmetrical, making the total polarization vanish. However, if the observer does not see part of the circle, some net polarization survives in the observed radiation. This happens if a beamed fireball is observed off–axis when $`1/(\theta _c+\theta _o)<\mathrm{\Gamma }<1/(\theta _c-\theta _o)`$.
At the beginning of the afterglow, when $`\mathrm{\Gamma }`$ is large, the observer sees only a small fraction of the fireball and no polarization is observed. At later times, when $`\mathrm{\Gamma }`$ becomes smaller than $`1/(\theta _c-\theta _o)`$, the observer will see only part of the circle centered on $`\theta _o`$: there is then an asymmetry, and a corresponding net polarized flux. To understand why the polarization angle in this configuration is horizontal, consider that the part of the circle which is not observed would have contributed to the polarization in the vertical direction. At later times, as the fireball slows down even more, a larger area becomes visible. When $`\mathrm{\Gamma }\approx 1/(\theta _c+\theta _o)`$, the dominant contribution to the flux comes from the upper regions of the fireball, which are vertically polarized. The change of the position angle happens when the contributions from horizontal and vertical polarization are equal, resulting in a vanishing net polarization. At still later times, when $`\mathrm{\Gamma }\approx 1`$, light aberration vanishes, the observed magnetic field is completely tangled, and the polarization disappears. Figure 2 shows the result of the numerical integration of the appropriate equations (see ghi99 for the detailed discussion). As derived in the above qualitative discussion, the lightcurve of the polarized fraction shows two maxima, with the position angle rotated by 90° between them. It is interesting to note the link with the lightcurve. The upper panel of Fig. 2 shows the lightcurve of the total flux divided by the same lightcurve in the assumption of spherical geometry. As expected, the lightcurve of the beamed fireball shows a break with respect to the spherical one. A larger off–axis ratio produces a more gentle break in the lightcurve, and is associated with a larger value of the polarized fraction. The behaviour of the total flux and polarization lightcurves allows us to constrain the off–axis ratio $`\vartheta _o/\vartheta _c`$, but is insensitive to the absolute value of the beaming angle $`\vartheta _c`$. Therefore, even if we could densely sample the polarization lightcurve, the beaming angle could be derived only by assuming a density for the interstellar medium, i.e. a relation between the observed time and the braking law of the fireball. On the other hand, the detection of a 90° rotation of the polarization angle of the afterglow would be the clearest sign of beaming of the fireball, especially if associated with a smooth break in the lightcurve. Polarimetric follow-up of afterglows is hence a powerful tool to investigate the geometry of fireballs.
# A Search for OH Megamasers at $`z>0.1`$: Preliminary Results

## 1. Introduction

OH megamasers (OHMs) are found in luminous infrared galaxies, and strongly favor ultraluminous infrared galaxies (ULIRGs). Since photometric surveys have demonstrated that nearly all ULIRGs are the products of major galaxy mergers (Sanders, Surace, & Ishida 1999), OHMs offer a promising means to detect merging galaxies across the greater part of the history of the universe ($`0<z<3`$). The galaxy merger rate plays a central role in the process of galaxy evolution, and must be empirically determined in order to disentangle galaxy number evolution from luminosity evolution (Le Févre et al. 1999). There are a number of observable merger products which surveys can use to measure the galaxy merger rate, but each has its limitations. The most successful methods used to date are optical surveys for the morphological signatures of mergers such as tidal tails, rings, shells, and filaments. Le Févre et al. (1999) has used this method, as well as close pair surveys, to determine the merger fraction of galaxies up to $`z=0.91`$. Optical surveys, however, require high angular resolution, suffer from dust extinction effects, are biased by the optical brightening which occurs during major mergers, and may exclude advanced mergers. Other survey methods capitalize on the very effects which bias optical surveys. FIR surveys for ULIRGs can use infrared brightening to identify mergers, and this technique can be extended down to the sub-mm regime, where several groups have detected sources which may be the high-redshift counterparts of local ULIRGs (Smail et al. 1999). Molecular emission lines are also enhanced by mergers, and some of these lines can mase, such as the H₂O or OH hyperfine transitions. OHMs, in particular, appear to be quite common in ULIRGs: roughly 1 in 6 ULIRGs is observed to host an OHM (Darling & Giovanelli 1999) — it may be that all ULIRGs host OHMs, but the beaming of the OHM emission determines the observable fraction. Although the most distant OHM detected to date has $`z=0.265`$ (Baan et al. 1992), the most luminous OHMs should be detectable up to $`z\sim 3`$ with current instruments (Briggs 1998). Understanding the relationship between the OHM and ULIRG luminosity functions at low redshift should allow one to measure, via OHM surveys, the merger rate of galaxies across a significant fraction of the epoch of galaxy evolution. One goal of this survey is to carefully quantify the OHM fraction in ULIRGs at $`0.1<z<0.2`$.

## 2. Survey Sample and Preliminary Results

OHM candidates were selected from the IRAS Point Source Catalog redshift survey (PSCz; W. Saunders 1999, private communication) following the criteria: (1) IRAS 60 $`\mu `$m detection, (2) $`0.1<z<0.45`$, and (3) 0° $`<\delta <`$ 37° (the Arecibo sky). The lower limit on redshift is set to avoid local radio frequency interference (RFI), and the upper limit is set by the bandpass of the L-band receiver at Arecibo. These candidate selection criteria limit the number of candidates in the PSCz to 296. The lower bound on redshift and the requirement of IRAS detection at 60 $`\mu `$m select candidates which are (U)LIRGs. From a set of 69 candidate OHMs which were observed at Arecibo, 11 new OHMs and 1 new OH absorber have been detected.
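For concreteness, the three selection cuts above can be expressed as a simple filter. This is a minimal sketch only; the record layout and field names are hypothetical, and the example entries are made up:

```python
# Apply the three candidate cuts quoted above to a PSCz-like record list.
catalog = [
    {"name": "src A", "f60_jy": 1.2, "z": 0.13, "dec_deg": 12.0},  # made-up entries
    {"name": "src B", "f60_jy": 0.8, "z": 0.05, "dec_deg": 20.0},
]

def is_candidate(src):
    return (src["f60_jy"] > 0.0               # (1) IRAS 60-micron detection
            and 0.1 < src["z"] < 0.45         # (2) above local RFI, within the L-band coverage
            and 0.0 < src["dec_deg"] < 37.0)  # (3) the Arecibo sky

print([s["name"] for s in catalog if is_candidate(s)])   # ['src A']
```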
We can place upper limits on the OH emission from 53 of the non-detections, while the remaining 4 candidates remain ambiguous due to strong RFI or a strong radio continuum which produced standing waves in the bandpass and confounded line detection. Spectra and observed and derived parameters of the new detections are reported in Darling & Giovanelli (1999). The new OHMs span a wide range of spectral shapes, luminosities, masing conditions, and host properties. The diversity of the OHM spectra represents a corresponding diversity of physical conditions in the masing regions, possibly ranging from spatially extended ($`>100`$ pc) unsaturated emission associated with starbursts to small-scale ($`<1`$ pc) saturated emission associated with AGN. Figure 1 depicts the observed sample in a FIR color-luminosity plot. The distribution indicates that OHMs strongly favor the most FIR-luminous hosts, and have a lesser tendency to also favor “warmer” hosts. Selection effects influence these trends, particularly Malmquist bias. The non-detections in the lowest $`L_{FIR}`$ range tend to have the least confident OH non-detection thresholds. We estimate from the Malmquist bias-corrected $`L_{FIR}`$–$`L_{OH}`$ relation calculated by Kandalian (1996) that there are perhaps 4 additional OHMs lurking among the non-detections of this sample. This estimate is highly uncertain due to the statistics of small numbers, the uncertainty in the $`L_{FIR}`$–$`L_{OH}`$ relation, and the scatter of the known OHMs about the relation.

## 3. Expectations and Conclusions

Survey work performed to date shows an OHM detection rate of 1 in 6, which indicates that we can expect to discover about 50 new OHMs in the complete survey. This figure doubles the OHM sample and increases the $`z>0.1`$ sample by a factor of six. Based on the set of previously reported OHMs, we can also expect to discover a few OH “gigamasers” ($`L_{OH}>10^4L_{\odot }`$). There are two of these extremely luminous OHMs known today. Detection of more OH gigamasers will solidify confidence in the upper end of the OHM luminosity function and will lend merit to the notion that OH gigamasers should be observable up to $`z\sim 3`$ with current instruments (Briggs 1998). The existence of OH gigamasers is crucial to the study of the galaxy merger rate at high redshifts. This survey will provide the first uniform, flux-limited sample of OHMs, finally making it possible to empirically explore the physics of OHM phenomena in a statistically meaningful manner, and to evaluate theoretical models. The uniform sample will also offer the first reliable measure of the incidence of OHMs in ULIRGs as a function of host properties, especially $`L_{FIR}`$. Once the OHM fraction in ULIRGs is quantified at low redshifts, one can perform OHM surveys at higher redshifts to measure the luminosity function of ULIRGs — and hence the merger rate of galaxies — at arbitrary redshifts. The current survey will be able to measure the galaxy merger rate up to $`z=0.25`$. Our preliminary result demonstrates the high OHM detection rate achievable with the upgraded Arecibo telescope in short integration times. The strong dependence of the OHM fraction in ULIRGs on $`L_{FIR}`$ and the selection of the most FIR-luminous LIRGs (via the criterion $`z>0.1`$) produce a high detection rate compared to previous OHM surveys (e.g., Staveley-Smith et al. 1992; Baan, Haschick, & Christian 1992).
By extrapolation, we predict that most hyperluminous IR galaxies host detectable OHMs, and that an OHM survey of this class of LIRG would be highly successful. The main barrier to this type of survey is the paucity of detectors at the redshifted frequency of OH in the range $`0.5<z<1.5`$. ALMA will resolve the molecular cloud structures in galactic nuclei which produce OHMs and provide an unprecedented understanding of the dynamics, density field, and composition of these regions. Tying molecular studies of low redshift galactic nuclei to the corresponding observed OHM properties will allow us to extend that local understanding to a physical study of OHMs at high redshift.

### Acknowledgments.

The authors are very grateful to Will Saunders for access to the PSCz catalog and to the staff of NAIC for observing assistance and support. This research was supported by Space Science Institute archival grant 8373 and made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

## References

Baan, W. A., Rhoads, J., Fisher, K., Altschuler, D. R., & Haschick, A. 1992, ApJ, 396, L99

Baan, W. A., Haschick, A., & Christian, H. 1992, AJ, 103, 728

Briggs, F. H. 1998, A&A, 336, 815

Darling, J. & Giovanelli, R. 1999, AJ, submitted

Kandalian, R. A. 1996, Astrophysics, 39, 237

Le Févre, O., et al. 1999, preprint (astro-ph/9909211)

Sanders, D. B., Surace, J. A., & Ishida, C. M. 1999, preprint (astro-ph/9909114)

Smail, I., Ivison, R. J., Owen, F. N., Blain, A. W., & Kneib, J.-P. 1999, preprint (astro-ph/9907083)

Staveley-Smith, L., Norris, R. P., Chapman, J. M., Allen, D. A., Whiteoak, J. B., & Roy, A. L. 1992, MNRAS, 258, 725
# Constraints on Interstellar Plasma Turbulence Spectrum from Pulsar Observations at the Ooty Radio Telescope

## 1. Introduction

Refractive Interstellar Scintillation (RISS) effects on pulsar signals are powerful techniques for discriminating between different models that have been proposed for the power spectrum of plasma density fluctuations in the Interstellar Medium (ISM; e.g. Rickett 1990). The nature of the spectrum is considered to be a major input for understanding the underlying mechanism of interstellar plasma turbulence. Data from our long-term pulsar scintillation observations using the Ooty Radio Telescope (ORT) at 327 MHz are used to investigate the nature of the spectrum in the Local Interstellar Medium (LISM; the region within $`\sim `$1 kpc of the Sun). Dynamic scintillation spectra were obtained for 18 pulsars in the DM range 3–35 $`\mathrm{pc}\mathrm{cm}^{-3}`$ at $`\sim `$10–100 epochs spanning $`\sim `$100–1000 days during 1993–1995 (Bhat et al. 1999). From these observations, various scintillation properties and ISM parameters are estimated with accuracies much better than has been possible from most earlier data. The time series of the parameters, viz., decorrelation bandwidth ($`\nu _d`$), scintillation time scale ($`\tau _d`$), and the drift slope of intensity scintillation patterns, together with the pulsar flux density, are used to study various observable effects of Interstellar Scintillation, based on which the spectral form is inferred over the spatial scale range $`10^7`$ m to $`10^{13}`$ m.

## 2. Results and Conclusions

The main results from the present study are:

1. Observations show large-amplitude modulations of the diffractive scintillation observables ($`\nu _d`$ and $`\tau _d`$) and flux. The measured depths of modulations are considerably larger than the predictions of models based on a thin-screen scattering geometry and a simple Kolmogorov form of density spectrum.

2. The statistical properties of diffractive and refractive scattering angles are used to obtain precise estimates of the spectral slope ($`\beta `$) over the spatial scale range $`10^7`$ m to $`10^{11}`$ m. While for 12 pulsars $`\beta `$ is found to be consistent with the Kolmogorov index (at $`\pm `$2-$`\sigma `$ levels), it is found to be significantly larger for 6 pulsars ($`11/3<\beta <4`$).

3. From the anomalous scintillation behaviour of persistent drift slopes lasting over many months, we infer the power levels at spatial scales $`10^{12}`$–$`10^{13}`$ m (i.e., 1–2 orders of magnitude larger than refractive scales) to be 2–3 orders of magnitude larger than that expected from a Kolmogorov form of spectrum (Figure 1).

Although there are various methods of investigating the nature of the spectrum, measurements based on a particular method give similar implications for the spectral form. A careful consideration of all the available results from the literature and our observations leads to the picture of a Kolmogorov-like spectrum ($`\beta \approx 11/3`$) in the spatial scale range $`10^6`$ m to $`10^{11}`$ m, with a low wavenumber enhancement (at $`\kappa \sim 10^{-12}`$–$`10^{-13}`$ $`\mathrm{m}^{-1}`$). This is not quite in agreement with the results from some of the recent literature, where the spectrum is suggested to be an approximate power-law over the wide range $`10^8`$ m to $`10^{13}`$ m (cf. Armstrong et al. 1995).
Further, for several nearby pulsars (distances $`\sim `$200–700 pc), the spectrum is found to be somewhat steeper ($`11/3<\beta <4`$), and there is a weak, systematic trend for a decrease in the spectral slope ($`\beta `$) with distance. A more complete description of this work can be found in Bhat, N. D. R., Gupta, Y., & Rao, A. P. 1999, ApJ, 514, 249.

## References

Armstrong, J. W., Rickett, B. J., & Spangler, S. R. 1995, ApJ, 443, 209

Bhat, N. D. R., Rao, A. P., & Gupta, Y. 1999, ApJS, 121, 483

Rickett, B. J. 1990, ARA&A, 28, 561
# Conference Impression

## 1. Introduction

These conference proceedings are the condensed result of the 2nd meeting on Asymmetrical PNe, held at MIT during August of 1999. Being the last such meeting of this millennium, and a successor of the successful 1st asymmetrical meeting held at Oranim in Israel during August 1994 (published as Ann. Isr. Phys. Soc., 11, in 1995), it was especially good to see that the subject has made significant progress over the last lustrum. The main impact, as you can see immediately by just a quick skim through this volume, has come from the imaging done by the Hubble Space Telescope. The wealth of complex detail revealed by these unprecedented images has left the theorists reeling, the observers gasping, and both groups with plenty of ground–based work ahead of them. A special highlight of the meeting was the free hand–out to participants of a CD with superb WFPC2 images by Hajian & Terzian! Note that references to papers in this volume are from here on indicated by an asterisk next to the author's name.

## 2. Overall impression

My overall impression from this meeting is that the observed morphologies of PNe are shown to be more and more complex, mainly due to the images from HST. The circle with a dot in the middle of 15 years ago is no more! The impact of the increased spatial resolution data available from the space observations is extraordinary! Archival research with HST data is now becoming an important tool for PNe research, especially in combination with ground–based high resolution spectroscopy. To date there are about 125 HST images of PNe available. Building on the basis created by the work of Josef Solf in the early eighties (e.g. Solf 1983, 1984), the spatial resolution and spectroscopic separation at the sub–pixel level of small features is now coming of age, and is teaching us a lot about the formation of PNe. All this detail, however, is clearly posing a problem for the theorists: the complexity is enormous and at present theory understandably lags the observations. Having said this, theory has also made progress; several models are getting better at producing the observed gross shapes of nebulae (Frank, García–Segura, Livio, Soker and others (\*)), and attempts at explaining small scale structure (Dwarkadas (\*)) are being made. The overall theoretical picture, however, is still quite elusive. Mass loading obviously plays an important role in some objects (e.g. globules in the Ring nebula, Dyson (\*)), and perhaps also in NGC6369 and NGC6751, just from their complex morphology, and the theory for this phenomenon is well developed. Point–symmetry has become very popular, and it seems that there are hardly any asymmetrical PNe left that do not have some kind of point–symmetry. I recall that at the meeting in La Serena (Mass Loss on the AGB and Beyond) held in 1992, the concept was hardly known. The use of movies to show the time dependent behaviour of data is also on the increase, with several presentations both of observational and theoretical material at the meeting, giving some interesting new ways of visualising data. The movie of M2–9's inner nebula (Balick (\*)) especially struck me, since it showed the nebula's rotation, which would have got lost had the data been presented any other way. To “see” the ISM penetrating a PN (Villaver (\*)), and thus creating an asymmetry, was also spectacular. Internet access to this kind of movie is also an excellent way of showing the community what is happening.

## 3. Terminology
## 3. Terminology

As a result of the increasing complexity of the observed morphologies, the terminology has recently also proliferated. Table 1 lists the terms I heard at this meeting to describe features in PNe. They range from the practical to the sublime, and even beyond… "Paradigm" is definitely in, the previously much over-used "scenario" is out; "Weather" (which you ignore…) is in; "Physics" (…which you don't) is out. Other much used key words were: precession, point-symmetry, BRETS, episodic or periodic outflows, irregular, mass loading. Finally, a nice piece of nomenclature was reportedly found in the address of a famous Observatory, the "Navel Absorbatory". It is clear that astronomers are also influenced by "fashions" or "trends". Please start a trend and use the word "polarigenic" (coined by yours truly about 15 yrs ago), as in "polarigenic mechanism" for something that produces polarisation.

## 4. Observational appearance of bipolar and point-symmetric PNe

Figure 1 shows the possible appearances of bipolar and point-symmetric nebulae as projected onto the plane of the sky. Under certain circumstances blue and red shifted components can be located on the same side of the object, even though this is, at first sight, counter-intuitive. For objects with a precession cone cut by the line of sight or plane of the sky this is observed (e.g. IC 4634), and such objects are point-symmetric. M2-9 is a bipolar that shows plane symmetry in its inner nebula, and point-symmetry in its outer lobes, which are also both red shifted since they are reflecting light from the central object. So far this is unique among PNe. Nebulae are put into the following classes:

- Bipolars: NO (near, or not too far from, the plane of the sky), EO (nearly end-on), RR (both lobes red shifted).
- Point-symmetricals: P1 ($0 \le i \le 0.5\varphi$), P2 ($0.5\varphi \le i \le 90\degree - 0.5\varphi$), P3 ($90\degree - 0.5\varphi \le i \le 90\degree$), where $\varphi$ is the precession cone opening angle and $i$ is the inclination angle of the nebula to the plane of the sky.
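The P1–P3 boundaries above are simple interval tests on the inclination $i$ for a given cone opening angle $\varphi$. As a minimal sketch (assuming angles in degrees, and treating the boundaries as inclusive, which the text does not specify):

```python
def point_symmetric_class(i_deg: float, phi_deg: float) -> str:
    """Classify a point-symmetric PN by inclination i and precession
    cone opening angle phi (both in degrees), per the P1/P2/P3 scheme."""
    if not 0 <= i_deg <= 90:
        raise ValueError("inclination must lie between 0 and 90 degrees")
    half_cone = 0.5 * phi_deg
    if i_deg <= half_cone:
        return "P1"
    if i_deg <= 90 - half_cone:
        return "P2"
    return "P3"

# Example: a nebula with a 30-degree precession cone seen at i = 20 degrees
print(point_symmetric_class(20.0, 30.0))  # -> P2
```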
## 5. Future work

Some lines of research are promising and should be continued in the near future. Here I mention a few, based on what I heard and saw at the meeting, that I find of particular potential interest.

### 5.1. Binaries versus single stars

The whole issue of what kind of mechanism drives the production of the most asymmetrical and physically different class of PNe, the bipolar nebulae (Corradi & Schwarz, 1995), depends critically on finding the binary central stars. There is strong circumstantial evidence for the presence of binaries in several bipolars, but it has so far been very difficult to obtain direct determinations of the numbers of bipolars and other PNe that contain binary central stars. A promising method is that of radial velocity determinations, especially in the infra-red, where obscuration is less of a problem. Bond (\*) has done pioneering imaging work with HST, which is rightly being continued. From the theorist's point of view the single versus binary star model for bipolar nebulae is split into two parts: the making of a bipolar nebula starting with an asymmetrical density distribution, and the mechanism to produce that initial density distribution in the first place. To make a bipolar nebula with some outflow in an asymmetrical density distribution is relatively easy, and many models can do this. Particularly successful are the models of García-Segura (\*), a two wind interaction combined with magnetic field and near break-up rotation, and the work by Frank et al. (\*), based originally on work by Icke (1988) using the Kompaneets theory. The nebulae with the highest aspect ratios, such as M2-9, pose more of a problem, as the required density gradient and other parameters become unphysically high. Here the binaries form a more natural system to generate a strong pole to equator density gradient, and their accretion disks can blow disk winds that are highly collimated and fast, explaining the high observed velocities, the strong collimation, and the large wings of the observed hydrogen line profiles. Excretion disks, as proposed by Morris (1987), can shape the slow wind from the primary star into two lobes, through which the fast wind then blows out HH-like objects as in He2-104. In single stars there is no identified mechanism to produce the necessary high underlying density gradient that shapes the wind, and a particular combination of strong magnetic fields, rotation, and equatorial mass loss has to be invoked to give asymmetrical nebulae at all. My overall impression is that binaries are needed for the most extreme bipolars and perhaps for all bipolars. Point-symmetry is most easily explained by precession, again something that occurs naturally in binaries. Also the combination in M2-9 of both point- and plane symmetry is difficult to explain without a binary central object, as is the presence of [OIII] lines and a low luminosity central object. I think that an object like He3-1475 (see the HST image on the CD handed out at the meeting) cannot be explained by any single star model! We must go and find those binaries!

### 5.2. PNe-ISM interactions

There were several papers on the interactions of PNe with the ISM (Knill-Degani: theory; Kerber: observations; Villaver: modelling, with movie (\*)). This is clearly a very interesting asymmetry producing mechanism, especially affecting the outer parts of the PNe, and in some cases even resulting in stars being outside their own nebula. The ratio of density between a typical PN and the ISM is about 50 in the Galactic plane and 300 in the halo. Under certain circumstances the ISM can penetrate the PN, giving spectacular interactions, as shown in the presented movie.

### 5.3. Morphology in the HR diagram

The morphologies derived from HST images of a sample of PNe taken and to be taken in the Magellanic Clouds can yield important relationships between morphology and the central star evolutionary stage. This work was started by using Galactic PNe, extracting statistical information about a sample of which narrow band images had been taken, and making a morphological classification of the nebulae. The result was a plot of their central stars in the HR diagram as a function of morphological class, yielding some intriguing correlations, especially for the bipolars (Stanghellini et al. 1993). The large uncertainty in the individual distances to the PNe in this sample makes this result only statistically interesting. By going to the Clouds, the distance uncertainty is gone and much better results can be obtained, again using HST. Stanghellini is undertaking this important work.

### 5.4. Polarimetry

Polarimetry was meagrely represented at this conference, as at most meetings that do not specifically deal with the subject. Only one out of 55 talks and 2 out of 38 posters contained some item about polarimetry.
Since polarimetry often can provide a missing and vital piece of information, this is a plea for all observers to use and be more aware of polarimetry. The fact that the title of this meeting contains the word "asymmetrical" should already make people think about polarimetry, as it is uniquely suitable to detect asymmetries. Two examples of the importance of polarimetry are: Trammell et al. (1994) found that 77% of 31 AGB objects are intrinsically polarised, while Johnson and Jones (1991) found 74% of their sample to be polarised. There clearly is a high fraction of AGB stars with asymmetries. If there are no features in the polarisation spectrum, there has to be a global asymmetry in the object, and probably dust scattering; if there are spectral features, local asymmetries such as convection cells or other atmospheric features must be present. The whole critical question of the onset of asymmetry during late stellar evolution, whereby essentially spherical stars produce PNe of which many are asymmetrical, can be uniquely studied with polarimetry. The mechanism of dust scattering in the faint loops in M2-9 (Schwarz et al. 1997) was a supposition until the loops were found to be 60% polarised at right angles to the plane formed by the source, scatterer and observer. Such a high degree of polarisation is not only easy to observe, it also firms up the proposed dust scattering mechanism to a certainty. This then allowed the distance to M2-9 to be determined accurately from its expansion parallax. Polarimetry provided the key piece of information that allowed the physical parameters of this object to be found. Continued and increased work using polarisation measurements on PNe and AGB stars (such as that in press (ApJ) by Weintraub, Kastner (who showed the data at this meeting), Sahai, & Hines, who used NICMOS to image AFGL 2688) will surely yield important results!

In summary, this was a meeting clearly dominated by the observers using HST imagery making theorists' lives difficult. The old Chinese curse "May you live in interesting times!" is applicable here, and times will become even more interesting in the near future. Thanks for inviting me, Joel and Noam, and see you at the next asymmetrical meeting!

## References

Corradi, R.L.M. & Schwarz, H.E. 1995, A&A, 293, 871
Icke, V. 1988, A&A, 202, 1771
Johnson, J.J. & Jones, T.J. 1991, AJ, 101, 1735
Morris, M. 1987, PASP, 95, 1115
Schwarz, H.E. et al. 1997, A&A, 319, 267
Solf, J. 1983, ApJL, 266, L113
Solf, J. 1984, A&A, 139, 296
Stanghellini, L., Corradi, R.L.M., & Schwarz, H.E. 1993, A&A, 279, 521
Trammell, S.R., Dinerstein, H.L., & Goodrich, R.W. 1994, AJ, 108, 984
# Discovery of two high-magnetic-field radio pulsars

## 1. Introduction

Recently it has been suggested (Thompson & Duncan 1992, Kulkarni & Frail 1993, Vasisht & Gotthelf 1997, Kouveliotou et al. 1998) that, in addition to the radio pulsars, there exists a class of isolated rotating neutron stars with ultra-strong magnetic fields, the so-called "magnetars." The observational properties of radio pulsars and putative magnetars are very different. Known radio pulsars, whose spin periods span the range from 0.0015–8.5 s, rarely have observable X-ray pulsations, and, in all cases, the X-ray power is much smaller than the spin-down luminosity. By contrast, the objects that have been suggested to be magnetars, namely soft gamma repeaters (SGRs) and anomalous X-ray pulsars (AXPs), exhibit pulsations with periods 5–12 s, and high energy emission that is many orders of magnitude stronger than their spin-down luminosity (Mereghetti & Stella 1995). Their pulsations have gone undetected at radio wavelengths. The dichotomy is thought to be a result of the much larger magnetic fields in magnetars, with magnetic field decay heating the neutron star to produce thermal X-ray emission (Thompson & Duncan 1993), or thermal emission from initial cooling enhanced by the large field (Heyl & Hernquist 1997). Here we report the discovery of two isolated radio pulsars, PSRs J1119-6127 and J1814-1744, which have the largest inferred surface magnetic fields yet seen among radio pulsars. The results reported here will be described in more detail by Camilo et al. (2000).

## 2. Observations and Results

PSRs J1119-6127 and J1814-1744 were discovered as part of an ongoing survey of the Galactic Plane using the 64-m Parkes radio telescope (Lyne et al. 2000; see also D'Amico et al., these proceedings). PSR J1119-6127 has $P=0.41$ s and period derivative $\dot{P}=4.0\times 10^{-12}$, the largest known among radio pulsars. We have also measured an apparently stationary period second derivative, $\ddot{P}\simeq -4\times 10^{-23}$ s$^{-1}$, making this only the third pulsar for which this has been possible through absolute pulse numbering. PSR J1814-1744 has $P=4.0$ s and $\dot{P}=7.4\times 10^{-13}$. Spin and astrometric parameters for both pulsars are given in Table 1. From the standard equation for the dipolar surface magnetic field, $B=3.2\times 10^{19}(P\dot{P})^{1/2}$ G, we infer surface magnetic fields of $4.1\times 10^{13}$ G and $5.5\times 10^{13}$ G for PSRs J1119-6127 and J1814-1744, respectively. These are the highest magnetic field strengths yet observed among radio pulsars.
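As a quick numerical check of the quoted field strengths (a minimal sketch in Python; the dipole-field constant and spin parameters are those given above, and the characteristic age $\tau=P/2\dot{P}$ anticipates the age discussion in Section 3):

```python
from math import sqrt

YEAR = 3.156e7  # seconds per year

# (P [s], Pdot [s/s]) for the two new pulsars, from Section 2
pulsars = {
    "PSR J1119-6127": (0.41, 4.0e-12),
    "PSR J1814-1744": (4.0, 7.4e-13),
}

for name, (P, Pdot) in pulsars.items():
    B = 3.2e19 * sqrt(P * Pdot)        # dipolar surface field, gauss
    tau = P / (2.0 * Pdot) / YEAR      # characteristic age, years
    print(f"{name}: B = {B:.1e} G, tau = {tau/1e3:.1f} kyr")

# PSR J1119-6127: B = 4.1e+13 G, tau = 1.6 kyr
# PSR J1814-1744: B = 5.5e+13 G, tau = 85.6 kyr
```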
## 3. Discussion

Figure 1 is a plot of $\dot{P}$ versus $P$ for the radio pulsar population, with PSRs J1119-6127 and J1814-1744 indicated. Also shown are the sources usually identified as magnetars, namely the five AXPs and two SGRs for which $P$ and $\dot{P}$ have been measured. Most models of the radio emission physics depend on pair-production cascades above the magnetic poles and hence on the strength of the magnetic field. However, at field strengths near or above the quantum critical field, $B_c \equiv m_e^2c^3/e\hbar = 4.4\times 10^{13}$ G, the field at which the cyclotron energy is equal to the electron rest-mass energy, processes such as photon splitting may inhibit pair-producing cascades. It has therefore been argued (Baring & Harding 1998) that a radio-loud/radio-quiet boundary can be drawn on the $P$–$\dot{P}$ diagram, with radio pulsars on one side, and AXPs and SGRs on the other (see Fig. 1). The existence of PSRs J1119-6127 and J1814-1744, however, demonstrates that radio emission can be produced in neutron stars with surface magnetic fields equal to or greater than $B_c$.

Especially noteworthy is the proximity of PSR J1814-1744 to the cluster of AXPs and SGRs at the upper right corner of Figure 1. In particular, this pulsar has a nearly identical $\dot{P}$ to the AXP 1E 2259+586 (Fahlman & Gregory 1981, Baykal et al. 1996, Kaspi et al. 1999). The disparity in their emission properties is therefore surprising. The radio emission upper limit (Coe et al. 1994) for 1E 2259+586 implies a radio luminosity at 1400 MHz of $<0.8$ mJy kpc$^2$. This limit is comparable to the lowest values known for the radio pulsar population. That the radio pulse may be unobservable because of beaming cannot of course be ruled out. The radio-loud/radio-quiet boundary line displayed in Figure 1 is more illustrative than quantitative. However, the apparently normal radio emission from PSRs J1119-6127 and J1814-1744, and the absence of radio emission from AXP 1E 2259+586, suggests that it will be difficult to delineate any such boundary without fine model-tuning. Furthermore, Pivovaroff et al. (2000) show, from archival data, that PSR J1814-1744 must be significantly less X-ray luminous than 1E 2259+586. The similar spin parameters for these two stars, and in turn the common features between 1E 2259+586 and the other AXPs and SGRs, imply that very high inferred magnetic field strengths cannot be the primary factor governing whether an isolated neutron star is a magnetar.

PSR J1119-6127 is notably young. Only three other pulsars having ages under 2 kyr are known: the Crab pulsar ($\tau =1.3$ kyr), PSR B1509-58 ($\tau =1.6$ kyr), and PSR B0540-69 ($\tau =1.7$ kyr). The age of a pulsar is given by $\tau =[1-(P_0/P)^{n-1}]\,P/[(n-1)\dot{P}] \approx P/2\dot{P}$, where $P_0$ is the spin period at birth (generally assumed to be much smaller than the current spin period) and $n$ is the "braking index," defined via the relation for the spin evolution $\dot{\nu }\propto -\nu ^n$, where $\nu \equiv 1/P$. In the standard oblique rotating vacuum dipole model, $n\equiv \nu \ddot{\nu }/(\dot{\nu })^2=3$. For PSR J1119-6127, the parameters listed in Table 1 imply $n=3.0\pm 0.1$. This is the first measured braking index for a pulsar that is consistent with the usually assumed $n=3$.
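Numerically, with $\nu =1/P$, $\dot{\nu }=-\dot{P}/P^2$ and $\ddot{\nu }=2\dot{P}^2/P^3-\ddot{P}/P^2$, the braking index reduces to $n=2-P\ddot{P}/\dot{P}^2$. A minimal check against the Section 2 spin parameters (the sign of $\ddot{P}$ is reconstructed here, and no measurement uncertainties are propagated):

```python
# Braking index n = nu * nuddot / nudot**2  =  2 - P * Pddot / Pdot**2
P, Pdot, Pddot = 0.41, 4.0e-12, -4e-23   # PSR J1119-6127 (Section 2)

n = 2.0 - P * Pddot / Pdot**2
print(f"braking index n = {n:.2f}")      # ~3.0, consistent with n = 3.0 +/- 0.1
```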
## REFERENCES

Baring, M. G. & Harding, A. K. 1998, ApJ, 507, 55
Baykal, A., Swank, J. H., Strohmayer, T., & Stark, M. J. 1998, A&A, 336, 173
Camilo, F., Kaspi, V. M., Lyne, A. G., Manchester, R. N., Bell, J. F., D'Amico, N., McKay, N. P. F., & Crawford, F. 2000, submitted
Coe, M. J., Jones, L. R., & Lehto, H. 1994, MNRAS, 270, 178
Duncan, R. C. & Thompson, C. 1992, ApJ, 392, L9
Fahlman, G. G. & Gregory, P. C. 1981, Nature, 293, 202
Heyl, J. S. & Hernquist, L. 1997, ApJ, 489, L67
Kaspi, V. M., Chakrabarty, D., & Steinberger, J. S. 1999, ApJ, 525, L33
Kouveliotou, C., et al. 1998, Nature, 393, 235
Kulkarni, S. R. & Frail, D. A. 1993, Nature, 365, 33
Lyne, A. G., Camilo, F., Manchester, R. N., Bell, J. F., Kaspi, V. M., D'Amico, N., McKay, N. P. F., Crawford, F., Morris, D. J., Sheppard, D. C., & Stairs, I. H. 2000, MNRAS, in press
Mereghetti, S. & Stella, L. 1995, ApJ, 442, L17
Pivovaroff, M., Kaspi, V. M., & Camilo, F. 2000, ApJ, submitted
Thompson, C. & Duncan, R. C. 1993, ApJ, 408, 194
Vasisht, G. & Gotthelf, E. V. 1997, ApJ, 486, L129
# EXIST: A High Sensitivity Hard X-ray Imaging Sky Survey Mission for ISS

## I Introduction

The full sky has not been surveyed in space (imaging) and time (variability) at hard x-ray energies. Yet the hard x-ray (HX) band, defined here as 10–600 keV, is key to some of the most fundamental phenomena and objects in astrophysics: the nature and ubiquity of active galactic nuclei (AGN), most of which are likely to be heavily obscured; the nature and number of black holes; the central engines in gamma-ray bursts (GRBs) and the study of GRBs as probes of massive star formation in the early universe; and the temporal measurement of extremes: from kHz QPOs to SGRs for neutron stars, and microquasars to blazars for black holes. A concept study was conducted for the Energetic X-ray Imaging Survey Telescope (EXIST) as one of the New Mission Concepts selected in 1994 (Grindlay et al 1995). However, the rapid pace of discovery in the HX domain in the past $\sim$2 years, coupled with the promise of a likely 2–10 keV imaging sky survey ABRIXAS2 (see http://www.aip.de/cgi-bin/w3-msql/groups/xray/abrixas/index.html) in c.2002-2004, and the recent selection of Swift (see http://swift.gsfc.nasa.gov/), which will include a $\sim$10–100 keV partial sky survey (to $\sim$1 mCrab) in c.2003-2006, has prompted a much more ambitious plan. A dedicated HX survey mission is needed with full sky coverage each orbit and $\sim$0.05 mCrab all-sky sensitivity in the 10–100 keV band (comparable to ABRIXAS2), extending into the 100–600 keV band with $\sim$0.5 mCrab sensitivity. Such a mission would require very large total detector area and large telescope field of view. These needs could be met very effectively by a very large coded aperture telescope array fixed (zenith pointing) on the International Space Station (ISS), and so EXIST-ISS was recommended by the NASA Gamma-Ray Program Working Group (GRAPWG) as a high priority mission for the coming decade. This mission concept has now been included in the NASA Strategic Plan formulated in Galveston as a post-2007 candidate mission. In this paper we summarize the Science Goals and briefly present the Mission Concept of EXIST-ISS. Details will be presented in forthcoming papers, and are partially available on the EXIST website (http://hea-www.harvard.edu/EXIST/EXIST.html).

## II Science Goals and Objectives

EXIST would pursue two key scientific goals: a very deep HX imaging and spectral survey, and a very sensitive HX all-sky variability survey and GRB spectroscopy mission. These Survey (S) and Variability (V) goals can be achieved by carrying out several primary objectives:

S1: Sky survey for obscured AGN and accretion history of the universe. It is becoming increasingly clear that most of the accretion luminosity of the universe is due to obscured AGN, and that these objects are very likely the dominant sources for the cosmic x-ray (and HX) diffuse background (e.g. Fabian 1999). No sky survey has yet been carried out to measure the distribution of these objects in luminosity, redshift, and broad-band spectra in the HX band where, as is becoming increasingly clear from BeppoSAX (e.g. Vignati et al 1999), they are brightest. EXIST would detect at least 3000 Seyfert 2s and conduct a sensitive search for Type 2 QSOs. Spectra and variability would be measured, and detailed followup could then be carried out with the narrow-field focussing HX telescope, HXT, on Constellation-X (Harrison et al 1999), as well as IR studies.
S2: Black hole accretion on all scales. The study of black holes, from x-ray binaries to AGN, in the HX band allows their ubiquitous Comptonizing coronae to be measured. The relative contributions of non-thermal jets at high $\dot{m}$ require broad band coverage to $\gtrsim 511$ keV, as does the transition to ADAFs at lower $\dot{m}$ values. HX spectral variations vs. broad-band flux can test the underlying similarities in accretion onto BHs in binaries vs. AGN.

S3: Stellar black hole content of the Galaxy. X-ray novae (XN) appear to be predominantly BH systems, so their unbiased detection and sub-arcmin locations, which allow optical/IR identifications, can provide a direct measure of the BH binary content (and XN recurrence time) of the Galaxy. XN containing neutron stars can be isolated by their usual bursting activity (thermonuclear flashes), and since they may solve the birth rate problem for millisecond pulsars (Yi and Grindlay 1998), their statistics must be established. A deep HX survey of the galactic plane can also measure the population of galactic BHs not in binaries, since they could be detected as highly cutoff hard sources projected onto giant molecular clouds. Compared to ISM accretion onto isolated NSs, for which a few candidates have been found, BHs should be much more readily detectable due to their intrinsically harder spectra and (much) lower expected space velocities, $V$, and larger mass $M$ (Bondi accretion depending on $M^2/V^3$).

S4: Galactic survey for obscured SNR: SN rate in the Galaxy. Type II SNe are expected to disperse $\sim 10^{-4}\,M_\odot$ of $^{44}$Ti, with the total a sensitive probe of the mass cut and NS formation. With a $\sim$87 yr mean-life for decay into $^{44}$Sc, which produces narrow lines at 68 and 78 keV, obscured SNe can be detected throughout the entire Galaxy for $\gtrsim$300 yr, given the $\sim 10^{-6}$ photons cm$^{-2}$ s$^{-1}$ line sensitivity and $\lesssim$2 keV energy resolution (at 70 keV) possible for EXIST. Thus the likely detection of Cas A (Iyudin et al 1994) can be extended to more distant but similarly (or more heavily) obscured SNe to constrain the SN rate in the Galaxy. The all-sky imaging of EXIST would extend the central-radian galactic survey planned for INTEGRAL to the entire galaxy.

V1: Gamma-ray bursts at the limit: SFR at z $\gtrsim$ 10. Since at least the "long" GRBs located with BeppoSAX are at cosmological redshifts, and have apparent luminosities spanning at least a factor of 100, it is clear that even the apparently lower luminosity GRBs currently detected by BeppoSAX could be detected with BATSE out to z $\sim$4, and that the factor $\sim$5 increase in sensitivity with Swift will push this back to z $\sim$5–15 (Lamb and Reichart 1999). The additional factor of $\sim$4 increase in sensitivity for EXIST would allow GRB detection and sub-arcmin locations for z $\gtrsim$ 15–20, and thus allow the likely epoch of Pop III star formation to be probed if indeed GRBs are associated with collapsars (e.g. Woosley 1993) produced by the collapse of massive stars. The high throughput and spectral resolution of EXIST would enable high time resolution spectra which can test internal shock models for GRBs.

V2: Soft Gamma-ray Repeaters: population in the Galaxy and Local Group. Only 3 SGR sources are known in the Galaxy and 1 in the LMC.
Since a typical $\sim$0.1 sec SGR burst spike can be imaged (5$\sigma$) by EXIST for a peak flux of $\sim$200 mCrab in the 10–30 keV band, the typical bursts from the newly discovered SGR 1627-41 (Woods et al 1999), with peak flux $\sim 2\times 10^{-6}$ erg cm$^{-2}$ s$^{-1}$, would be detected out to $\sim$200 kpc. Hence the brightest "normal" SGR bursts are detectable out to $\sim$3 Mpc and the rare giant outbursts (e.g. the March 5, 1979 event) out to $\sim$40 Mpc. Thus the population and physics of SGRs, and their association with magnetars and young SNRs, can be studied throughout the Local Group, and the rare super-outbursts beyond Virgo.
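These horizons are simple inverse-square scalings from the quoted baseline (a $\sim 2\times 10^{-6}$ erg cm$^{-2}$ s$^{-1}$ burst detectable to $\sim$200 kpc); a minimal sketch, in which the peak fluxes of the brighter burst classes are back-computed from the quoted distances rather than taken from burst catalogs:

```python
from math import sqrt

# Baseline from the text: typical SGR 1627-41 bursts, with peak flux
# ~2e-6 erg/cm^2/s, are detectable by EXIST out to ~200 kpc.
F_REF = 2e-6     # erg cm^-2 s^-1
D_REF = 200.0    # kpc

def horizon_kpc(F_peak: float) -> float:
    """Detection horizon for a burst of peak flux F_peak (erg/cm^2/s)."""
    return D_REF * sqrt(F_peak / F_REF)

# Bursts detectable to 3 Mpc / 40 Mpc must be (3000/200)^2 ~ 225x and
# (40000/200)^2 ~ 4e4x brighter than the baseline bursts:
for label, boost in [("brightest 'normal' bursts", 225.0),
                     ("giant outbursts", 4.0e4)]:
    print(f"{label}: horizon ~ {horizon_kpc(boost * F_REF) / 1e3:.0f} Mpc")
```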
V3: HX blazar alert and spectra: measuring the diffuse IR background. The cosmic IR background (CIRB) over $\sim$1–100 $\mu$m is poorly measured (if at all) and yet can constrain galaxy formation and the luminosity evolution of the universe (complementing S1 above). As reviewed by Catanese and Weekes (1999), observing spectral breaks (from $\gamma \gamma$ absorption) for blazars in the band $\sim$0.01–100 TeV can measure the CIRB out to z $\sim$1 if the intrinsic spectrum is known. Since the $\gamma$-ray spectra of the detected (low z) blazars are well described by synchrotron-self Compton (SSC) models, for which the hard x-ray ($\sim$100 keV) synchrotron peak is scattered to the TeV range, the HX spectra can provide both the required underlying spectra and time-dependent light curves for all objects (variable!) to be observed with GLAST and high-sensitivity ground-based TeV telescopes (e.g. VERITAS).

V4: Accretion torques and X-ray pulsars. The success of BATSE as a HX monitor of bright accreting pulsars in the Galaxy (cf. Bildsten et al 1997), in which spin histories and accretion torques were derived for a significant sample, can be greatly extended with EXIST: the very much larger reservoir of Be systems can be explored, and wind vs. disk-fed accretion studied in detail. The wide-field HX imaging and monitoring capability will also allow a new survey for pulsars and AXPs in highly obscured regions of the disk, complementing S4 above.

V5: QPOs and accretion disk coronae. The rms variability generally, and QPO phenomena in particular, appear more pronounced above $\sim$10 keV for x-ray binaries containing both BH and NS accretors, suggesting the Comptonizing corona is directly involved. Thus QPOs and HX spectral variations can allow study of the poorly-understood accretion disk coronae, with extension to the AGN case. Although the wide field increases backgrounds, and thus effective modulation, the very large ($\gtrsim$1 m$^2$) HX imaging area on any given source means that multiple $\sim$100 mCrab LMXBs could be simultaneously measured for QPOs with 10% rms amplitude in the poorly explored 10–30 keV band.

## III EXIST-ISS Mission Concept

To achieve the desired $\sim$0.05 mCrab sensitivity full sky up to 100 keV (and beyond) requires a very large area array of wide-field coded aperture (or other modulation) telescopes. The very small field of view ($\sim 10'$) of true focussing (e.g. multi-layer) HX telescopes precludes their use for all-sky imaging and monitoring surveys. EXIST-ISS would take the coded aperture concept to a practical limit, with 8 telescopes, each with 1 m$^2$ of effective detector area and a 40° × 40° field of view (FOV). The individual FOVs are offset by 20° for a combined FOV of 160° × 40°, or $\sim$2 sr. By orienting the 160° axis perpendicular to the orbit vector, the full sky can be imaged each orbit if the telescope array is fixed-pointed at the local zenith. This gravity-gradient type orientation, and the large spatial area of the telescope array, are ideally matched for the ISS, which provides a long mounting structure (main truss) conveniently oriented perpendicular to the motion, as depicted on the EXIST website. The sensitivity would yield $\sim 10^4$ AGNs full sky, thus setting a confusion-limit resolution requirement ($\sim$1/40 "beam") of $\sim 5'$. With this coded mask pixel size, high energy occulting masks (5 mm, W) can be constructed with 2.5 mm pixel size for minimal collimation. The mask shadow is then recorded by tiled arrays of CdZnTe (CZT) detectors with effective pixel sizes of $\sim$1.3 mm, yielding a compact (1.3 m) mask-detector spacing. The CZT detectors would likely be 20 mm square × 5 mm thick (for $\gtrsim$20% efficiency at 500 keV) and read out by flip-chip bonded ASICs (e.g. Bloser et al 1999, Harrison et al 1999). The 8-telescope array is continuously scanning (sources on the orbital plane drift across the 40° FOV in 10 min, with correspondingly longer exposures per orbit near the poles), with each photon time-tagged and aspect corrected ($\sim 10''$) so that ISS pointing errors or flexure are inconsequential over the large FOV. Source positions are centroided to $\lesssim 1'$ for $\gtrsim 5\sigma$ detections. The resulting sky coverage is remarkably uniform, with $\lesssim$25% variation in exposure full sky over the $\sim$2 month precession period of the ISS orbit. More details of the current mission concept are given in Grindlay et al (2000), and will be further developed in the implementation study being conducted by the EXIST Science Working Group (EXSWG).
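The imaging numbers above follow from elementary coded-aperture geometry; as a rough sketch (taking the angular resolution to be the mask pixel size over the mask-detector spacing, and assuming source centroiding sharpens as 1/SNR, both generic coded-aperture rules of thumb rather than details of the EXIST design):

```python
from math import atan, degrees

MASK_PIXEL = 2.5e-3   # m, mask pixel size (from the text)
SPACING    = 1.3      # m, mask-detector spacing (from the text)

theta = degrees(atan(MASK_PIXEL / SPACING)) * 60.0   # arcmin
print(f"angular resolution ~ {theta:.1f} arcmin")    # ~6.6', cf. the ~5' requirement

# Centroiding a detection of signal-to-noise SNR localizes to ~theta/SNR,
# approaching the quoted <~1' positions for detections above ~5 sigma:
for snr in (5, 10, 20):
    print(f"SNR = {snr}: position ~ {theta / snr:.2f} arcmin")
```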
# Using binary stars to bound the mass of the graviton

## I INTRODUCTION

The advent of large scale laser interferometer gravitational wave detectors promises to lead to the direct detection of gravitational radiation, which will create a new observational field of science: gravitational wave astronomy. The introduction of gravitational waves into the retinue of astronomical observations will yield important new information about the dynamics of astrophysical systems, and will also provide an excellent opportunity to conduct new tests of gravity. Proposed space-based laser interferometers, such as the LISA (Laser Interferometer Space Antenna) and OMEGA (Orbiting Medium Explorer for Gravitational Astrophysics) observatories, will be particularly well poised to begin astrophysical studies, since there are known sources of gravitational radiation which will be easily visible to these instruments, namely interacting binary white dwarf (IBWD) star systems. The IBWD sources are particularly appealing targets because they are good candidates for simultaneous optical and gravitational wave observations. Many of the IBWDs which are known to exist are being studied and monitored by the Center for Backyard Astrophysics (CBA),* and are expected to be strong sources of monochromatic gravitational waves which should be easily visible to an instrument such as LISA with only a few minutes of signal integration. Simultaneous optical and gravitational wave observations will be useful in refining the current physical models used to describe these systems, and for testing relativistic theories of gravity in the radiative regime by comparing the propagation speeds of electromagnetic and gravitational wave signals.

(* The CBA is a network of amateur astronomers equipped with CCD photometry equipment who monitor variable stars. The network is managed by professional astronomers at Columbia University.)

This paper examines how the comparison of the phase of the orbitally modulated electromagnetic signal (the light curve) and a gravitational wave signal from an IBWD star system can be used to bound the mass of the graviton. If the mass of the graviton is assumed to be known by other measurements, then the observations may be used to determine the properties of the binary star system being monitored.

Current conservative bounds on the graviton mass come from looking for violations of Newtonian gravity in surveys of planetary motions in the solar system. If gravity were described by a massive field, the Newtonian potential would have Yukawa modifications of the form

$$V(r)=-\frac{M}{r}\exp(-r/\lambda _g) \qquad (1)$$

where $M$ is the mass of the source of the potential, and $\lambda _g=h/m_g$ is the Compton wavelength of the graviton, $m_g$ being the graviton mass. The current best bound on the graviton mass from planetary motion surveys is obtained by using Kepler's third law to compare the orbits of Earth and Mars, yielding $\lambda _g>2.8\times 10^{12}$ km ($m_g<4.4\times 10^{-22}$ eV). Another bound on the graviton mass can be established by considering the motions of galaxies in bound clusters, yielding $\lambda _g>6\times 10^{19}$ km ($m_g<2\times 10^{-29}$ eV). This bound, while stronger than solar system estimates, is considerably less robust, due to uncertainty about the matter content of the Universe on large scales (e.g., the amount and nature of dark matter is widely debated, and uncertain at best).
Recent work by Will has suggested that the mass of the graviton could be bounded using gravitational wave observations. If the graviton is a massive particle, then the speed of propagation of a gravitational wave will depend on its frequency. As binary systems evolve, they will slowly spiral together due to the emission of gravitational radiation. Over the course of time, the frequency of the binary orbit rises, ramping up rapidly in the late stages of the evolution, just prior to coalescence. Laser interferometer gravitational wave detectors should be able to track the binary system's evolution, obtaining the detailed time-dependent waveform using the matched filtering techniques required for data analysis in these detectors. Space-based detectors such as LISA will be able to observe the coalescence of massive ($10^5$ to $10^7\,M_\odot$) binary black holes, as well as the gravitational wave emission from compact binary star systems which are far from coalescence (e.g., interacting binary white dwarfs). Ground-based detectors such as LIGO will be able to detect the merger of smaller black hole binaries ($\sim 10\,M_\odot$), as well as the coalescence of compact binary stars (e.g., neutron star/neutron star binaries). If the graviton is a massive particle, then the observed signal will not perfectly match theoretical templates computed using general relativity theory, in which the graviton is massless; a massive graviton would cause dispersion in the gravitational waves. By using matched filtering of inspiral waveforms, this dispersion could be bounded, thereby bounding the mass of the graviton. Will finds that LIGO could bound the graviton mass at $\lambda _g>6.0\times 10^{12}$ km ($m_g<2.1\times 10^{-22}$ eV) by observation of the inspiral of two $10\,M_\odot$ black holes. A space-based interferometer such as LISA, observing the inspiral of two $10^7\,M_\odot$ black holes, could bound the graviton mass at $\lambda _g>6.9\times 10^{16}$ km ($m_g<1.8\times 10^{-26}$ eV).

The analysis in this paper shows that LISA observations of known IBWD sources could yield a bound as strong as $\lambda _g>5\times 10^{15}$ km ($m_g<2\times 10^{-25}$ eV), considerably stronger than present solar system based bounds. The IBWD bound also has the advantage of not depending on the complicated details of black hole coalescences.

Section II reviews what is known about the interacting binary white dwarfs, in particular the helium cataclysmic variable (HeCV) systems and their archetype, the binary AM CVn (AM Canum Venaticorum). Section III reviews the basic notions associated with (possibly) massive photons and gravitons. Sections IV–V propose a new experiment to measure the graviton mass using IBWD observations, and an expression for the mass is derived. In Section VI, the sensitivity predicted for LISA is used to estimate how precise a bound could be placed on the graviton mass from HeCV observations. Section VII summarizes the results, and also suggests how correlation of phase measurements might be used to measure other astrophysical parameters in the binary system, such as the accretion disk radius. Throughout this paper, geometric units with $G=c=1$ are employed, unless otherwise noted.

## II Interacting Binary White Dwarf Systems

Estimates suggest the Galaxy is populated by $\sim 10^7$ close binary star systems.
The sheer number of these systems is likely to have profound consequences for space-based gravitational wave observatories. The combined gravitational waves from these binaries will produce a stochastic background which rises well above the low frequency detection sensitivity of LISA. Particularly strong (e.g., nearby) binary systems will rise above this background and be observable by a spaceborne observatory. One class of such sources is the helium cataclysmic variable (HeCV) stars. The properties of the six nearest known HeCVs are shown in Table I. The predicted stochastic gravitational wave background due to short period binary stars, as calculated by Hils and Bender, and the predicted signals of the six nearest HeCVs, are plotted in Figure 1, along with the predicted sensitivity curve of LISA. If present models of the spatial density of close binaries in the Galaxy are correct, roughly 5000 of these sources should be individually detectable by a space-based laser interferometer such as LISA.

Currently the best models for HeCVs describe a star system where the secondary star (the lower mass companion in the binary system, usually a degenerate helium dwarf star) has expanded to fill its Roche lobe, and the primary star (the larger mass, compact white dwarf) lies at the core of an accretion disk. Matter overflows from the secondary Roche lobe and streams onto the accretion disk, creating a hot spot which emits a strong electromagnetic signal. There are several mechanisms whereby the light curve of an HeCV could be modulated as seen from the perspective of observers on Earth. The simplest model is for systems whose orbital plane is close to the line of sight to the Earth, so that the stars periodically eclipse, partially or wholly. The eclipse phase could dim the stellar components, part of the primary emission regions of the accretion disk, or the Roche lobe. Another possible mechanism for variation in the light curve is associated with the emission from the secondary star, which has expanded to fill its Roche lobe. The star will appear brighter when the largest surface area is presented to the observer along the projected line of sight. This will occur twice in each orbit, when the line of sight is perpendicular to the line of centers. Signatures in the light curve due to projected area effects are called "ellipsoidal variations," and are most easily observed in infrared wavelengths. Yet another possible mechanism is currently favored to explain the source of variation in the light curve of the archetype system for HeCVs, AM CVn. In this model the hot spot on the accretion disk radiates approximately radially outward from the disk. As the binary orbits, this hot spot alternately turns towards and away from distant observers, leading to a modulation of the light curve (a so-called "flashlight" mechanism).

A detailed theoretical model of AM CVn has been constructed, describing a variety of signals which are present in the photometric data. This model suggests that AM CVn is a member of a class of variable stars that have periodic features in the light curve known as "superhumps". The model explains the superhump feature as being caused by the existence of an eccentric precessing accretion disk, with a precession period which is slightly longer than the orbital period of the binary.
Knowing the superhump period, $P_{sh}$, and the precessional period of the accretion disk (apsidal advance), $P_{aa}$, the model predicts the orbital period will be given by

$$P_{orb}^{-1}=P_{sh}^{-1}+P_{aa}^{-1}. \qquad (2)$$

Photometry of AM CVn shows the existence of a superhump signature at $P_{sh}=1051.2$ s, and the period of the accretion disk precession at $P_{aa}=13.38$ hr. Using Eq. (2), this model predicts a binary orbital period of $P_{orb}=1028.77\pm 0.18$ s for AM CVn. Photometric observations by the CBA have recently confirmed an orbital period of $P_{orb}=1028.7325\pm 0.0004$ s.
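Eq. (2) is a one-line computation with the two measured periods as the only inputs (a minimal sketch):

```python
# Eq. (2): 1/P_orb = 1/P_sh + 1/P_aa for AM CVn
P_sh = 1051.2           # s, superhump period
P_aa = 13.38 * 3600.0   # s, accretion disk precession period (13.38 hr)

P_orb = 1.0 / (1.0 / P_sh + 1.0 / P_aa)
print(f"P_orb = {P_orb:.2f} s")  # ~1028.75 s, cf. the measured 1028.7325(4) s
```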
Because these systems are expected to be observable in the gravitational wave spectrum (in addition to the optical spectrum), they present an excellent opportunity to directly compare the propagation speed of electromagnetic and gravitational waves. Consider the schematic diagram shown in Figure 2. The phase fronts of the light curve modulation are represented in the top half of the diagram. The binary star system will also emit gravitational radiation which could be monitored by Earth-bound observers as well. The phase fronts of the gravitational wave signal are represented in the lower half of the diagramAssuming a circular orbit, the frequency of the gravitational radiation is actually twice the orbital frequency of the binary. For clarity, only half the gravitational phase fronts have been drawn in the diagram.. Suppose the two signals are emitted in phase at the source as shown. If the graviton is a massive particle, then the gravitational waves propagate at a speed $`v_g<1`$, and the gravitational phase fronts will lag behind the light curve phase fronts when the signals arrive at Earth, as shown. By measuring the lag between the phase fronts, the mass of the graviton can be measured or bounded. To determine the lag between the two signals, the phase of each signal must be measured. Consider a binary with orbital frequency $`\omega _o`$ at a distance $`D`$ from Earth. Assuming the photon to be massless ($`v=c=1`$), the observed phase of the light curve will be $$\varphi _{em}=\omega _oD+A,$$ (7) where the term $`A`$ represents a variety of effects (discussed in Section V) which could create phase delays between the electromagnetic and gravitational signals that are being monitored. In contrast, the gravitational wave signal, traveling at $`v_g<1`$ will arrive at Earth with a phase of $$\varphi _{gw}=2\omega _o\frac{D}{v_g}.$$ (8) The phase lag, $`\mathrm{\Delta }`$, between the light curve and gravitational wave signals is constructed from these two phases: $`\mathrm{\Delta }`$ $`=`$ $`{\displaystyle \frac{1}{2}}\varphi _{gw}\varphi _{em}`$ (9) $`=`$ $`\omega _oD\left({\displaystyle \frac{1}{v_g}}1+\alpha \right),`$ (10) where $`\alpha =A/(\omega _oD)`$. The factor of $`1/2`$ insures that the phase subtraction is done between two signals with the same frequency. It is convenient to define the fractional change in the phase as $$ϵ=\frac{\mathrm{\Delta }}{\omega _oD}=\left(\frac{1}{v_g}1+\alpha \right).$$ (11) Taking the definition of $`v_g`$ from Eq. (6) and substituting into Eq. (11), the Compton wavelength of the graviton as a function of the fractional phase lag is found to be $$\lambda _g=\frac{\pi }{\omega _o}\sqrt{\frac{1}{2}\left(1+\frac{1}{ϵ\alpha }\right)}.$$ (12) An obvious question to ask about this analysis is how to determine whether or not the phase difference between the two signals is greater than a single cycle, and hence undetectably large (e.g., $`\mathrm{\Delta }=2n\pi +\xi `$, where $`\xi `$ is a small quantity). One can eliminate this concern because strong bounds on the graviton mass already exist. The largest observable phase shift which is consistent with current bounds can be computed by simply evaluating Eq. (12) with $`\lambda _g`$ equal to the bound of interest. For example, the bound on the graviton mass given by solar system constraints, applied to the AM CVn system, yields a maximum fractional phase change of $`ϵ=1.5\times 10^9`$. 
## IV Correlation of Electromagnetic and Gravitational Observations to Measure the Graviton Mass

As noted in Section II, it is expected that the Galaxy harbors a large population of interacting binary white dwarf stars. The light curves for several of these systems are already known, obtained from ground-based optical photometry. Because these systems are expected to be observable in the gravitational wave spectrum (in addition to the optical spectrum), they present an excellent opportunity to directly compare the propagation speed of electromagnetic and gravitational waves.

Consider the schematic diagram shown in Figure 2. The phase fronts of the light curve modulation are represented in the top half of the diagram. The binary star system will also emit gravitational radiation which could be monitored by Earth-bound observers as well. The phase fronts of the gravitational wave signal are represented in the lower half of the diagram. (Assuming a circular orbit, the frequency of the gravitational radiation is actually twice the orbital frequency of the binary. For clarity, only half the gravitational phase fronts have been drawn in the diagram.) Suppose the two signals are emitted in phase at the source as shown. If the graviton is a massive particle, then the gravitational waves propagate at a speed $v_g<1$, and the gravitational phase fronts will lag behind the light curve phase fronts when the signals arrive at Earth, as shown. By measuring the lag between the phase fronts, the mass of the graviton can be measured or bounded.

To determine the lag between the two signals, the phase of each signal must be measured. Consider a binary with orbital frequency $\omega _o$ at a distance $D$ from Earth. Assuming the photon to be massless ($v=c=1$), the observed phase of the light curve will be

$$\varphi _{em}=\omega _oD-A, \qquad (7)$$

where the term $A$ represents a variety of effects (discussed in Section V) which could create phase delays between the electromagnetic and gravitational signals that are being monitored. In contrast, the gravitational wave signal, traveling at $v_g<1$, will arrive at Earth with a phase of

$$\varphi _{gw}=2\omega _o\frac{D}{v_g}. \qquad (8)$$

The phase lag, $\Delta$, between the light curve and gravitational wave signals is constructed from these two phases:

$$\Delta =\frac{1}{2}\varphi _{gw}-\varphi _{em}=\omega _oD\left(\frac{1}{v_g}-1+\alpha \right), \qquad (9\text{--}10)$$

where $\alpha =A/(\omega _oD)$. The factor of $1/2$ ensures that the phase subtraction is done between two signals with the same frequency. It is convenient to define the fractional change in the phase as

$$\epsilon =\frac{\Delta }{\omega _oD}=\frac{1}{v_g}-1+\alpha . \qquad (11)$$

Taking the definition of $v_g$ from Eq. (6) (with $\omega =2\omega _o$, since the gravitational wave frequency is twice the orbital frequency) and substituting into Eq. (11), the Compton wavelength of the graviton as a function of the fractional phase lag is found to be

$$\lambda _g=\frac{\pi }{\omega _o}\sqrt{\frac{1}{2}\left(1+\frac{1}{\epsilon -\alpha }\right)}. \qquad (12)$$

An obvious question to ask about this analysis is how to determine whether or not the phase difference between the two signals is greater than a single cycle, and hence undetectably large (e.g., $\Delta =2n\pi +\xi$, where $\xi$ is a small quantity). One can eliminate this concern because strong bounds on the graviton mass already exist. The largest observable phase shift which is consistent with current bounds can be computed by simply evaluating Eq. (12) with $\lambda _g$ equal to the bound of interest. For example, the bound on the graviton mass given by solar system constraints, applied to the AM CVn system, yields a maximum fractional phase change of $\epsilon =1.5\times 10^{-9}$. Since AM CVn lies at a distance of $D=101$ pc, this value of $\epsilon$ indicates a maximum phase difference of $\Delta =9.6\times 10^{-2}$ for that system (we have let $\alpha \approx 0$ here for convenience; if the measured phase difference were larger than this value, it would indicate that $\alpha \neq 0$).

## V Phase Delays

In order to evaluate Eq. (12), one must not only measure the phase lag between the two signals, but an estimate must be made for the value of $\alpha$. The parameter $\alpha$ can be written as the sum of two primary sources of delay between the gravitational and electromagnetic signal phases:

$$\alpha =\alpha _\star +\alpha _{\mathrm{path}}, \qquad (13)$$

where $\alpha _{\mathrm{path}}$ is a phase lag associated with the wave's propagation from the binary to the observer at the Earth, and $\alpha _\star$ is a phase lag which depends on the specific astrophysical nature of the binary star system.

In principle, $\alpha _{\mathrm{path}}$ will be nonzero because the line of sight to the binary is an imperfect vacuum, with non-unit index of refraction. The variations in index of refraction over the path will cause a lag in the electromagnetic signal. The dominant source of this lag will be caused by propagation of the signal through the Earth's atmosphere. A simple estimate of the value of $\alpha _{\mathrm{path}}$ can be made by computing the electromagnetic phase delay due to propagation through a modeled exponential atmosphere, with a density profile

$$\rho (r)=\rho _o\exp(-r/h_s), \qquad (14)$$

where $h_s$ is the scale height of the atmosphere. If the index of refraction, $n$, is assumed to vary linearly with the density, then

$$n(r)=1+\eta \exp(-r/h_s), \qquad (15)$$

where $\eta =(n_{atm}-n_{vac})=(n_{atm}-1)$ is the difference in index of refraction between the atmosphere and vacuum. The index of refraction is related to the signal's propagation speed $v$ by

$$v=\frac{dr}{dt}=\frac{1}{n(r)}. \qquad (16)$$

Eq. (16) can be integrated using Eq. (15) to obtain

$$\int _0^{t_{transit}}dt=\int _0^{r_o}dr\left[1+\eta \exp(-r/h_s)\right], \qquad (17)$$

where $t_{transit}$ is the time it takes a photon to transit the atmosphere, and $r_o$ is the height at which the effects of the atmosphere become negligible. Completing the integration yields

$$t_{transit}=r_o+h_s\eta (1-e^{-r_o/h_s}). \qquad (18)$$

In order to compute a phase delay, one is interested in the time by which the photons are delayed by the atmosphere, which is

$$t_{delay}=t_{transit}-t_{vacuum}=t_{transit}-r_o, \qquad (19\text{--}20)$$

where $t_{vacuum}$ is the time it would take a photon to travel the same distance (i.e., the atmospheric depth) in vacuum. The phase delay introduced by this effect is obtained by multiplying the delay time by the frequency of the signal being observed,

$$\varphi _{delay}=\omega _ot_{delay}. \qquad (21)$$

The parameter $\alpha _{\mathrm{path}}$ is obtained from the phase delay by dividing by the reference phase, $\omega _oD$ (as in Eq. (10)), giving

$$\alpha _{\mathrm{path}}=\frac{\varphi _{delay}}{\omega _oD}=\frac{t_{delay}}{D}. \qquad (22)$$

It is now possible to numerically estimate the delay introduced by the atmosphere. Assuming the Earth's atmosphere to be in hydrostatic equilibrium gives a scale height $h_s=8500$ m. A typical value for the difference between the atmosphere's index of refraction and unity, evaluated at sea level, is $2.8\times 10^{-4}$. Taking the atmospheric depth to be $r_o=10h_s$, Eq. (20) gives $t_{delay}=8.0\times 10^{-9}$ s.
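Plugging in the numbers (a minimal sketch; factors of $c$ are restored explicitly, and the conversion to $\alpha _{\mathrm{path}}$ anticipates the AM CVn distance, $D=101$ pc, quoted in the next paragraph, expressed as a light-travel time):

```python
from math import exp

C_LIGHT = 2.998e8   # m/s
PARSEC  = 3.086e16  # m

h_s = 8500.0        # m, atmospheric scale height
eta = 2.8e-4        # n_atm - 1 at sea level
r_o = 10.0 * h_s    # adopted atmospheric depth

t_delay = h_s * eta * (1.0 - exp(-r_o / h_s)) / C_LIGHT  # Eq. (20), seconds
alpha_path = t_delay / (101.0 * PARSEC / C_LIGHT)        # Eq. (22), D = 101 pc
print(f"t_delay    ~ {t_delay:.1e} s")   # ~7.9e-09 s
print(f"alpha_path ~ {alpha_path:.1e}")  # ~7.6e-19
```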
For the prototypical source AM CVn, at a distance $D=101$ pc, Eq. (22) yields $\alpha _{\mathrm{path}}=7.7\times 10^{-19}$. This is much smaller than any possible phase measurement, either electromagnetic or gravitational, and so we will henceforth ignore this source of uncertainty.

The parameter $\alpha _\star$ is a measure of the initial phase difference at the source between the gravitational wave and electromagnetic signals. It indicates the relative phase difference between the peaks in the light curve of the binary and the peaks in the quadrupole gravitational radiation pattern. Determining the value of $\alpha _\star$ requires knowledge of the position of the stars in the binary system when the electromagnetic signal peaks. The quadrupole gravitational radiation pattern will peak along the line of masses in the binary system, and also $180°$ away from the line of masses (since the frequency of the gravitational radiation is $\omega =2\omega _o$). If the primary variation in the binary light curve is associated with the transit of the line of masses across the observer's line of sight (e.g., the system is an eclipsing binary), then it is straightforward to assign $\alpha _\star =0$, indicating no initial phase delay between the gravitational wave signal and the binary's light curve. For more complicated systems such as AM CVn, the light curve variation reflects the orbital motion of the hot spot on the edge of the accretion disk where the matter stream from the secondary Roche lobe overflow strikes the disk. Studies of Roche lobe overflow show that the transferred material remains in a coherent matter stream which spirals in towards the primary star. In this sort of system, the location of the hot spot will determine the value of $\alpha _\star$, which will describe the amount by which the hot spot leads the line of masses, as shown in Figure 3. Estimates of the size of the primary accretion disk in HeCV type systems suggest that disk radii will be around $75\%$ of the primary Roche radius, but this estimate is only certain to within about $10\%$. This uncertainty makes it virtually impossible to estimate $\alpha _\star$ adequately for use in Eq. (12) from present observational data. Future observations of HeCV systems, either from advanced ground-based instruments such as the Keck Interferometer, or from space-based instruments such as the Space Interferometry Mission and the Terrestrial Planet Finder, could allow direct measurement of $\alpha _\star$ by optically imaging the detailed structure of close binary systems.

For cases when $\alpha _\star$ cannot be accurately determined, the dependence of the Compton wavelength of the graviton on this parameter (or on $\alpha _{\mathrm{path}}$) can be eliminated by subtraction of two observations of the source. Consider the situation shown in Figure 4, where the gravitational and electromagnetic signals are monitored when the Earth lies on one side of its orbit, and again six months later, when it lies on the other side of its orbit. When the Earth is in position 1, the phase difference between the electromagnetic and gravitational wave signals can be written

$$\Delta _1=\frac{\omega _oD}{v_g}-\omega _oD+A. \qquad (23)$$

Similarly, when the Earth is in position 2, the phase difference may be written

$$\Delta _2=\frac{\omega _o(D+L)}{v_g}-\omega _o(D+L)+A, \qquad (24)$$

where $L\simeq 2\,\mathrm{AU}$ is the path length difference for the time of flight between the two measurements.
Subtraction eliminates the unknown quantity $A$, yielding

$$\Delta _2-\Delta _1=\omega _oL\left(\frac{1}{v_g}-1\right). \qquad (25)$$

Defining the fractional change in phase from this quantity gives

$$\epsilon =\frac{\Delta _2-\Delta _1}{\omega _oL}=\frac{1}{v_g}-1, \qquad (26)$$

which in terms of the Compton wavelength becomes

$$\lambda _g=\frac{\pi }{\omega _o}\sqrt{\frac{1}{2}\left(1+\frac{1}{\epsilon }\right)}. \qquad (27)$$

The unknown parameter, $\alpha$, has been eliminated from the expression for $\lambda _g$, but at the cost of using a much shorter characteristic distance, $L\ll D$. This approach amounts to measuring the phase lag between the two signals over the time it takes to cross the Earth's orbit (at most; an improvement of roughly a factor of two may be obtained by combining the Earth's orbital motion with the proper motion of the binary system relative to the Sun, which will typically be of order a few AU per year), as opposed to the time it takes to propagate over the Earth–source distance, leading to a great loss of precision in the measurement of $\lambda _g$.

## VI Obtaining a Mass Bound

In order to estimate a bound on the graviton mass, assume a null result for the measurement of the phase difference, $\Delta$, between the two signals. The size of $\Delta$ (and therefore $\epsilon$) will then be limited only by the uncertainty in the measurements of the phase. Combining the uncertainty of the gravitational phase measurements with the electromagnetic phase measurements in quadrature yields

$$\Delta \leq \delta \Delta =\sqrt{\delta \varphi _{em}^2+\frac{1}{4}\delta \varphi _{gw}^2}, \qquad (28)$$

where $\delta \varphi _{em}$ and $\delta \varphi _{gw}$ are the uncertainties in each of the phases. For observations with a space-based interferometer, the error in phase measurements can be estimated as the ratio between the sampling time and the total integration time. For LISA, the sampling time is expected to be of order 1 s, with total integration times of 1 yr $\simeq 3\times 10^7$ s, yielding $\delta \varphi _{gw}=3\times 10^{-8}$. The CBA reports a 0.0004 s uncertainty over the 1028.7325 s period of AM CVn, yielding a phase uncertainty of $\delta \varphi _{em}=4\times 10^{-7}$.

For the case of AM CVn, the value of $\alpha _\star$ is still not known, so bounds on the graviton mass must be derived from Eq. (27). As shown in Figure 4, the characteristic distance $L$ is simply the path length difference between the two measurements. If the inclination of the binary system to the plane of the Earth's orbit is $\beta$, then the characteristic distance is $L=2\cos (\beta )\,\mathrm{AU}$. For AM CVn, which lies at ecliptic latitude $\beta =37.4°$, this yields $L=2.38\times 10^{11}$ m. Using this value, Eq. (27) gives a bound

$$\lambda _g>5\times 10^{14}\ \mathrm{m}=5\times 10^{11}\ \mathrm{km}, \qquad (29)$$

or, in terms of the graviton mass, $m_g<2\times 10^{-21}$ eV, about a factor of five worse than the present bound based on the motion of Mars. Even this weak bound would be of interest, however, since it is based on the dynamics of the gravitational field (i.e., gravitational waves, rather than the static Yukawa modifications of the Newtonian potential).
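Eq. (27), evaluated with the phase uncertainties and baseline above, reproduces this bound (a minimal sketch; the baseline is converted to a light-travel time, and the result back to meters, to restore the factors of $c$):

```python
from math import pi, sqrt

C_LIGHT = 2.998e8                 # m/s

P_orb   = 1028.7325               # s, AM CVn orbital period
omega_o = 2.0 * pi / P_orb        # rad/s
L       = 2.38e11 / C_LIGHT       # baseline, light-seconds

d_em, d_gw = 4e-7, 3e-8           # phase uncertainties
d_Delta = sqrt(d_em**2 + 0.25 * d_gw**2)                    # Eq. (28)

eps = d_Delta / (omega_o * L)                               # Eq. (26)
lambda_g = (pi / omega_o) * sqrt(0.5 * (1.0 + 1.0 / eps))   # Eq. (27), light-s
print(f"lambda_g > {lambda_g * C_LIGHT:.0e} m")             # ~4e14 m, of the
                                                            # order of Eq. (29)
```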
If the value of $\alpha _\star$ could be determined precisely (e.g., by monitoring ellipsoidal variations in the light curve in the infrared, as suggested in Section II, or with future optical interferometer observations), such that the uncertainty $\delta \alpha _\star \sim 10^{-7}$, then Eq. (12) could be used to bound the graviton mass. The distance to the known IBWD systems is typically of order $D\sim 100$ pc; combining this with a typical orbital period of $P\sim 1500$ s yields a bound

$$\lambda _g>1\times 10^{18}\ \mathrm{m}=1\times 10^{15}\ \mathrm{km}, \qquad (30)$$

or $m_g<1\times 10^{-24}$ eV. This potential bound would be a factor of four hundred more stringent than the present solar system based bound, and would be better than the bounds obtained from inspiraling black holes proposed by Will for all but very large black holes. During the next decade, as we await the launch of LISA, the optical astronomers may well succeed in further reducing the uncertainty in their phase measurements. If the optical signal phase error is reduced in Eq. (28) to the point where the dominant source of error is the gravitational wave phase measurement, then the bound obtained above could be improved by about another factor of five, to $\lambda _g>5\times 10^{15}$ km ($m_g<2\times 10^{-25}$ eV).

## VII SUMMARY

After the initial detection of gravitational waves, the challenge will be for the field to evolve into a productive observational science which makes firm contact with astrophysics, complementing the broad base of electromagnetic observations already supporting that field. The experiment proposed here is particularly appealing because it entails observations of known sources by space-based detectors. The existence of IBWDs has been verified (as opposed to more speculative sources, such as binary black hole coalescence events), and such objects are currently under study by observational astronomers. Detailed gravitational wave observations can begin almost as soon as a space-based interferometer such as LISA is online. We have shown that reliable bounds on the mass of the graviton of order $\lambda _g>1\times 10^{15}$ km could be obtained through detailed observations of interacting binary white dwarf star systems such as AM CVn. With the combination of detailed studies of such binary systems by optical interferometers and gravitational wave observations, this could be a very robust bound, several orders of magnitude greater than the current best bounds from solar system observations.

If one assumes the graviton to be a massless particle, as predicted by general relativity, then the same measurements described here can be employed to determine the structure of the binary star system. If the graviton is massless, then any phase difference measured between the gravitational wave and electromagnetic signal must be due to effects in the binary system. Setting $v_g=1$ in Eq. (11) yields

$$\epsilon =\frac{\Delta }{\omega _oD}=\alpha , \qquad (31)$$

showing that the difference in phase is simply an indicator of the value of $\alpha =\alpha _{\mathrm{path}}+\alpha _\star$. As was shown in Section V, the value of $\alpha _{\mathrm{path}}$ is expected to be negligible ($\alpha _{\mathrm{path}}=7.9\times 10^{-19}$ for AM CVn). In the cases where $\alpha _{\mathrm{path}}$ can be ignored, the measured phase difference will be a direct measure of the value of $A$, which is the amount of phase by which the electromagnetic signal leads the line of masses in the binary system.
With good models of the matter stream from Roche lobe overflow of the secondary (such as the trajectories shown in Figure 3), a measurement such as this could allow an accurate determination of the accretion disk radius and the refinement of physical models for HeCV-type stars solely from gravitational wave observations. ###### Acknowledgements. This work was supported in part by National Science Foundation Grant No. PHY-9734834 and NASA Cooperative Agreement No. NCC5-410.
# An X-Ray Spectroscopic Study of the SMC X-1/Sk 160 System ## 1 INTRODUCTION The evidence that some of the first discovered cosmic X-ray sources were binary star systems included periodic “low states” which were recognized as eclipses by a companion star. However, it was observed that during these eclipses there was still a residual flux of X-rays. Schreier et al. (1972) observed a residual flux from Cen X-3 during its eclipses and proposed that it might be due to “the slow radiative cooling of the gas surrounding the system which is heated by the pulsating source”. In observations with the GSFC Cosmic X-ray Spectroscopy experiment on board OSO-8, Becker et al. (1978) detected a significant flux of residual X-rays during three eclipses of the massive X-ray binary Vela X-1 and argued – on the grounds that the residual eclipse X-ray flux had, within limits, the same spectrum as the uneclipsed flux – that this eclipse radiation was most likely due to X-ray scattering around the primary star. Observations of the P-Cygni profiles of ultraviolet lines in massive stars showed that strong winds are a ubiquitous feature of massive stars (Morton, 1967). These winds are driven as the UV radiation from the star transfers its outward momentum to the wind in line transitions (Lucy & Solomon, 1970; Castor et al., 1975). In high-mass X-ray binaries (HMXBs), then, an obvious source of circumstellar material exists to scatter or reprocess the X-rays from the compact object and allow X-rays from these sources to be observed during eclipse. Information on the composition and dynamics of HMXB winds may provide important clues for the understanding of binary evolution (e.g. mass-loss rates). Information on the metal abundances may also inform the study of binary evolution. In other galaxies, such as the Magellanic Clouds, X-ray reprocessing in the winds of HMXBs may provide another way to measure the metal abundances in those galaxies and therefore inform the study of the evolution of those galaxies. Properties of HMXB winds have been inferred in studies which interpret the X-ray spectrum at various phases in terms of the absorption, emission, and scattering by the wind material. Studies of X-ray absorption during eclipse transitions have shown that the atmospheric densities near the surfaces of the massive companions decrease exponentially, with scale heights $`\sim `$1/10 of the stellar radius (Schreier et al., 1972; Sato et al., 1986; Clark et al., 1988). Such exponential regions are not predicted by the theory of Castor et al. (1975) or subsequent refinements (e.g. Pauldrach et al., 1986; Kudritzki et al., 1989) which assume that the wind is spherically symmetric. X-rays photo-ionize the wind, destroying the ions responsible for the radiation driving, and shutting off the wind acceleration, in the region illuminated by the X-ray star. Hydrodynamic simulations have been done to explore the behavior of winds in HMXBs under the influence of X-ray ionization and of X-ray heating of the exposed face of the companion star (Blondin et al., 1990; Blondin, 1994; Blondin & Woo, 1995). Several studies have compared observations with the results of Monte Carlo calculations with model winds. In these calculations, individual photons were tracked through trial wind density distributions where they could scatter from electrons or be absorbed. Fluorescent photons were emitted following inner shell ionization and these photons were tracked through the wind in the same way.
Lewis et al. (1992) found that Ginga spectra of Vela X-1 could be reproduced with a density distribution of the form of a radiatively driven wind with an exponential lower region. Clark et al. (1994) attempted to reproduce Ginga spectra of 4U 1538-52 with a similar density distribution but with added components resembling structures in the simulations of Blondin et al. (1990). These authors were able to reproduce the observed spectrum above 4.5 keV. They attributed an excess at lower energies in part to a dust-scattered halo. Woo et al. (1995) found that Ginga spectra of SMC X-1 could be reproduced from the calculated density distribution of Blondin & Woo (1995). Spectra from the moderate-resolution ($`E/\mathrm{\Delta }E\sim 15`$ at 1 keV) X-ray CCDs on ASCA — and the prospect of high resolution X-ray spectra from AXAF, XMM, and ASTRO-E — present new opportunities for using X-ray spectra in the study of HMXB winds. In an observation of Vela X-1 with ASCA, approximately 10 emission features emerged in the spectrum of the system viewed during eclipse of the X-ray source, when the much more intense, featureless continuum radiation was occulted. Most of these emission features were identified as K$`\alpha `$ emission lines from helium- and hydrogen-like ions of astrophysically abundant metals (Nagase et al., 1994). The Monte Carlo spectral calculations described above did not include recombination to highly ionized atoms, which is responsible for these lines, and so could not have predicted this spectrum. Moderate- and high-resolution spectra provide a powerful diagnostic of the conditions in the wind through study of this recombination radiation. Ebisawa et al. (1996) have estimated the size of the ionized regions in the wind in the Cen X-3 system from the magnitude of the recombination lines measured with ASCA, and Liedahl & Paerels (1996) have estimated the emission measure of ionized regions in the wind of the Cyg X-3 system from the recombination lines and narrow recombination continua in the ASCA spectrum. Sako et al. (1999) have estimated the differential emission measure of the wind of Vela X-1 from the recombination features. They also developed a method to compute the emission spectrum of recombination from a given matter distribution and have inferred the presence of additional dense matter in the Vela X-1 wind from the presence of fluorescence lines. In this paper, we present a study of the circumstellar matter in the SMC X-1/Sk 160 binary system based on observations of SMC X-1 in and out of eclipse. Using the XSTAR program, we calculate the spectra of reprocessed radiation for a range of values of the ionization parameter ($`\xi \equiv L/nr^2`$). We show that the eclipse spectrum of SMC X-1, which is similar in shape to the out-of-eclipse spectrum, resembles the spectra of X-rays reprocessed in material with either low or high ionization parameters but not the spectra expected from material with intermediate ionization parameters, which should exhibit strong recombination features. We then describe a method to calculate the spectra of reprocessed X-rays from an arbitrary wind distribution. This method takes account of electron scattering and fluorescence and calculates the reprocessed emission by summing the diffuse emission and absorbing along the lines of sight to the observer. We use this method to predict the spectrum of X-rays emitted by a wind with the density distribution of Blondin & Woo (1995).
Because our calculations include recombination radiation and because the ASCA data are sensitive to recombination spectral features, we obtain a conclusion which contradicts that of Woo et al. (1995). The matter distribution of Blondin & Woo (1995) would produce strong recombination radiation features which are excluded by the observed ASCA eclipse spectrum. In Section 2 we discuss our observations of SMC X-1 with ASCA. In Section 3, we discuss our calculations of the spectra of reprocessed X-rays from homogeneous, optically thin gas, and compare them to the ASCA data. In Section 4, we describe our procedure for computing the spectrum of reprocessing from a 3-D distribution of gas and apply it to the density distribution from the hydrodynamic simulation by Blondin & Woo (1995). ## 2 ASCA OBSERVATIONS SMC X-1 has been observed twice by the ASCA observatory: once, while the source was uneclipsed, soon after launch in April of 1993 during the performance verification (PV) phase, and again in October of 1995, while the source was eclipsed. The out-of-eclipse observation yielded 17,769 seconds of SIS0 data and 17,573 seconds of SIS1 data after screening according to the default criteria described in the ASCA Data Reduction Guide. During the eclipsed observation, SMC X-1 was observed over a period of approximately 94,000 seconds, resulting in 32,838 seconds of SIS0 data and 32,239 seconds of SIS1 data after screening. We used the FTOOLS package to extract light curves and spectra from these data and the XSPEC (Arnaud, 1996) program to do the spectral analysis. The source regions of the CCD chips were chosen to be squares centered on the image of SMC X-1 and 383′′ on a side. The remainders of the chips were used as the background regions. To derive spectra and light curves, the number of counts in each spectral or time bin in the background region was subtracted (after correcting for the difference in the relative sizes of the source and background regions) from the counts in the corresponding source bin. Light curves for both observations are plotted in the light-curve figure, along with a light curve from Ginga (from an observation described by Woo et al., 1995) in the band where that instrument’s energy range overlaps ASCA’s. The Ginga light curve illustrates the “high state” behavior of SMC X-1, with sharp eclipse transitions at phases $`\pm `$0.07. A spectrum was extracted from the out-of-eclipse ASCA observation using all but a few hundred seconds at the end, where the count rate started to decrease, possibly due to an onset of the low state (not visible in the light-curve figure). The resulting exposures were 17,226 seconds for SIS0 and 17,026 seconds for SIS1. Spectra were extracted from the entire time of the eclipse observation, though that observation extends beyond the time of nominal eclipse. The fact that the flux from SMC X-1 differed by no more than a factor of 3 between times inside and outside of the nominal eclipses indicates that the system was in the low state — with the compact object blocked by the precessing accretion disk (Wojdowski et al., 1998). Since the X-rays detected during that observation have been reprocessed in the stellar wind, all of the data from that observation, including that from outside of the nominal eclipse, were used in our analysis in order to improve the signal-to-noise ratio of the eclipse spectrum.
To further improve the signal-to-noise ratio, the energy channels, which oversample the detector resolution, were grouped so that each channel has at least 50 counts. Due to the high count rate in the out-of-eclipse observation, the errors due to counting statistics were small compared to the systematic errors due to the instrument calibration, and the data from the two detectors could not simultaneously be fitted to a single trial spectrum. For this reason, only the data from the SIS0 detector, which may be better calibrated, were used in spectral fits for this observation. The spectrum derived from the out-of-eclipse observation was well fit by a model which consists of a power law plus two broad gaussian components (one near 0.9 keV and one near 6 keV) absorbed by a small column of cold interstellar material. The fitted model for the photon flux is $$F(E)=e^{-\sigma (E)N_\mathrm{H}}[f_{\mathrm{pl}}(E)+f_{\mathrm{ga1}}(E)+f_{\mathrm{ga2}}(E)]\times \{\begin{array}{cc}1\hfill & E\le 10\mathrm{k}\mathrm{e}\mathrm{V}\hfill \\ \mathrm{exp}\left(-\frac{E-10\mathrm{k}\mathrm{e}\mathrm{V}}{15\mathrm{k}\mathrm{e}\mathrm{V}}\right)\hfill & E>10\mathrm{k}\mathrm{e}\mathrm{V}\hfill \end{array}$$ (1) where $`f_{\mathrm{pl}}(E)=K_{\mathrm{pl}}(E/1\mathrm{keV})^{-\alpha },`$ (2) $`f_{\mathrm{ga}i}(E)={\displaystyle \frac{K_{\mathrm{ga}i}}{\sigma _i\sqrt{2\pi }}}\mathrm{exp}\left(-{\displaystyle \frac{(E-E_i)^2}{2\sigma _i^2}}\right),`$ (3) and $`\sigma (E)`$ is the cross-section of interstellar absorption of Morrison & McCammon (1983). While the sensitivity of the ASCA detectors does not extend to the high energy cut-off, it has been measured with Ginga (Woo et al., 1995) and is included here for consistency with the XSTAR calculations. The observed spectra and best fit model are plotted in the out-of-eclipse spectral figure. Though they were not used in the spectral fits, the data from the SIS1 detector are included in this plot. There is a discrepancy between the two detectors near 1.3 keV which is comparable to the channel-to-channel variations in the observations of the quasar 3C 273 (Orr et al., 1998) which were used for calibration. The best fit values of the parameters for this and the two fits described below are tabulated in Table 1. Approximately the same result was obtained by Stahle et al. (1997) for this data set. The fitted value for the column of neutral hydrogen lies between values estimated by interpolation from neighboring directions in 21-cm emission surveys: $`4.6\times 10^{20}`$ cm<sup>-2</sup> for a galactic survey with $`1\mathrm{°}`$ resolution (Dickey & Lockman, 1990) and $`4.5\times 10^{21}`$ cm<sup>-2</sup> for a survey of the SMC with 98′′ resolution (Stanimirovic et al., 1999).
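The continuum model of Eqs. (1)–(3) is simple enough to evaluate directly. The following sketch leaves the Morrison & McCammon absorption cross-section as a user-supplied function and takes its parameters as arguments rather than the fitted values of Table 1.

```python
# Sketch of the continuum model of Eqs. (1)-(3). The absorption cross-section
# sigma(E) (Morrison & McCammon 1983) is left as a user-supplied callable;
# parameter values are inputs, not the fitted values of Table 1.
import numpy as np

def model_flux(E, N_H, sigma, K_pl, alpha, gauss1, gauss2):
    """Photon flux at energies E [keV]; gauss1/2 = (K, E_center, width)."""
    f = K_pl * E ** (-alpha)                     # Eq. (2), power law
    for K, E0, s in (gauss1, gauss2):            # Eq. (3), broad Gaussians
        f += K / (s * np.sqrt(2 * np.pi)) * np.exp(-(E - E0) ** 2 / (2 * s ** 2))
    f *= np.exp(-sigma(E) * N_H)                 # interstellar absorption
    cut = np.where(E > 10.0, np.exp(-(E - 10.0) / 15.0), 1.0)  # high-E cutoff
    return f * cut
```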
The flux of this model spectrum in the 13.6 eV–13.6 keV band (the band in which luminosity is defined in XSTAR) is $`6.44\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. For isotropic emission, this corresponds to a luminosity of $`1.9\times 10^{38}`$ erg s<sup>-1</sup> for a distance of 50 kpc. A source with this spectrum would give an on-axis count rate of 1.1 count s<sup>-1</sup> in the RXTE All-Sky Monitor (this count rate was computed by comparing the model flux to that of the Crab (Toor & Seward, 1974) in each of the 2–3, 3–5, and 5–10 keV bands and assuming that the count rate of the Crab is 25 count s<sup>-1</sup> in each of these bands), whereas the observed count rate of SMC X-1 in the All-Sky Monitor at the peak of the high state is approximately 3 count s<sup>-1</sup> (Wojdowski et al., 1998), which implies that the luminosity of SMC X-1 in its high state is approximately $`5\times 10^{38}`$ erg s<sup>-1</sup>. In the eclipse observation, the systematic calibration errors are much smaller than the statistical errors due to the low count rate. Therefore, spectral models were fit simultaneously to both the SIS0 and the SIS1 detectors. To fit the spectrum derived from the observation during eclipse, the same model that fit the out-of-eclipse data, scaled down in intensity, was tried. This is the spectrum that would be expected if the intrinsic spectrum of the accreting neutron star did not change between the two observations and the eclipse spectrum were due only to Compton scattering of the X-rays from the source. Relative to the rescaled out-of-eclipse spectrum, the eclipse data show a large excess at energies greater than 4 keV and a small, narrow excess near 1.8 keV, the approximate location of the fluorescence line of neutral silicon (see the eclipse-spectrum figure). An acceptable fit for the eclipse data can be obtained by allowing the parameters of the broad 6 keV feature to vary from their values scaled from the out-of-eclipse spectrum. A separate fit to the data points in the 1.5–2.0 keV range was done using a power law continuum and a narrow emission feature. The energy of this emission feature was found to be $`1.775\pm 0.020`$ keV and the flux was $`(1.5\pm 0.5)\times 10^{-5}`$ photon s<sup>-1</sup> cm<sup>-2</sup>. This feature is near the location of the detector’s internal fluorescence peak, and so we must consider the possibility that this feature is spurious. For the AXAF ACIS CCDs, which are similar to the ASCA CCDs, the probability that an X-ray with energy greater than the Si K edge will cause fluorescence in the detector is 1% just above the K edge and decreases to 0.2% at 4 keV (Prigozhin et al., 1998). The SIS0 screened event list has 4769 photons with energies above the silicon K edge, which should result in fewer than $`\sim `$50 fluorescent photons produced in the detector. The measured line flux ($`1.5\times 10^{-5}`$ photon s<sup>-1</sup> cm<sup>-2</sup>) corresponds to 74 photons in this spectrum. A miscalibration of the probability of fluorescence events of order 100% would be necessary to result in a spurious detection of this magnitude. Furthermore, in the data from the out-of-eclipse observation, in which the source has nearly the same spectrum, there is no deviation from the smooth spectral model at this energy. Also, the measured energy of this feature differs from the 1.74 keV of neutral silicon that would come from the detector but is consistent with the energy of the fluorescent line of partially ionized silicon. Still, confirmation of this feature will require better observations. We also tried a reflection model (the “hrefl” model by T. Yaqoob in XSPEC) to fit the $`\sim `$6 keV feature — i.e. we assumed that the source spectrum could be described by a power law plus a broad $`\sim `$0.9 keV component, and that the observed spectrum includes a component due to reflection of that spectrum from cold, neutral gas. For the out-of-eclipse spectrum, we assumed that the “escape fraction” was unity — i.e. the neutron star was directly visible. For the eclipsed spectrum we assumed the same model normalization — i.e. the neutron star had the same spectrum and luminosity as during the out-of-eclipse observation.
The best fit parameters for fits to the reflection models are given in Table 2. The out-of-eclipse spectrum requires a covering fraction which is greater than unity and therefore unphysical. ## 3 Spectral Models of Reprocessed Radiation For densities lower than approximately 10<sup>12</sup> cm<sup>-3</sup>, with which we are concerned, the heating, cooling, ionization, and recombination are dominated by interactions between single photons from a point source and gas particles (e.g. photo-ionization) and by two-particle interactions (e.g. recombination). If the radiation from the point source is not significantly attenuated, the rate of photon–particle interactions is proportional to $`Ln/r^2`$ and the rate of two-particle interactions is proportional to $`n^2`$. Here, $`L`$ is the luminosity of the point source, $`r`$ is the distance from it, and $`n`$ is the density of hydrogen (neutral and ionized). Spontaneous de-excitations (single particle transitions) happen on timescales short compared to the time between two-body interactions and can therefore be considered not as independent processes, but as part of the two-body interaction which produces the excited state. Therefore, for a given gas composition and a given radiation spectrum, the state of the gas is a function of the ratio of the quantities $`Ln/r^2`$ and $`n^2`$: the ionization parameter, $`\xi \equiv L/nr^2`$, defined by Tarter et al. (1969). Reprocessed radiation includes re-emission from recombination, bremsstrahlung, and fluorescence as well as electron scattering. Bremsstrahlung and recombination radiation are two-body processes, so for a given state of the gas, their intensity is proportional to $`n^2`$. Since, for a given gas composition and spectrum of incident radiation, the state of the gas is a function only of $`\xi `$, the volume emission coefficient for these processes may be written: $$j_\nu ^{(1)}=f_\nu (\xi )n^2.$$ (4) Photo-ionization of an inner-shell electron may result in a radiative cascade which fills the inner-shell vacancy. Because the resultant radiation is due to photo-ionization, its contribution to the volume emission coefficient is proportional to $`Ln/r^2`$ and can be written as: $`j_\nu ^{(2)}`$ $`=g_\nu (\xi )Ln/r^2`$ (5) $`=\xi g_\nu (\xi )n^2.`$ (6) The contribution of electron scattering, which we consider to be an absorption followed by emission, like fluorescence, is proportional to $`Ln/r^2=\xi n^2`$. Thus, for a given spectrum of primary radiation and gas composition, the entire spectrum of reprocessed emission from a given volume element $`dV`$ is a function of the ionization parameter scaled by $`n^2dV`$. We used the XSTAR program (v1.43, Kallman & Krolik, 1999) to calculate the spectrum of the X-rays that would be emitted by gas illuminated by radiation with the spectrum of SMC X-1. XSTAR assumes a point radiation source at the center of a spherically symmetric nebula and calculates the state of the gas and the continuum and line radiation emitted by the gas throughout the nebula. We took the spectrum of the central point source to be that derived from the out-of-eclipse observation (Equation 1) but with the absorbing column set to zero. We set the density of the gas at a constant value of $`10^3`$ cm<sup>-3</sup> and did calculations for metal abundances equal to the solar value (Anders & Grevesse, 1989) and less by 0.5, 1, 1.5, and 2.0 dex.
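For concreteness, the ionization parameter and the two emissivity scalings above can be written as a few lines of code; the numerical values in the example are purely illustrative.

```python
# Sketch: the ionization parameter of Tarter et al. (1969) and the scaling of
# the two classes of reprocessed emission (Eqs. 4-6). Values are illustrative.
import math

def xi(L, n, r):
    """Ionization parameter [erg cm s^-1]: L [erg/s], n [cm^-3], r [cm]."""
    return L / (n * r * r)

# Illustrative only: gas ~2e12 cm from a 5e38 erg/s point source
print(math.log10(xi(5e38, 1e11, 2e12)))   # -> log xi ~ 3.1
# Two-body processes (Eq. 4):           j_nu ~ f_nu(xi)      * n**2
# Scattering/fluorescence (Eqs. 5-6):   j_nu ~ xi * g_nu(xi) * n**2
```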
We chose a low gas density so that all optical depths would be negligible. These calculations are similar to Model 1 of Kallman & McCray (1982). While XSTAR includes some effects of electron scattering, it does not explicitly compute the “emission” due to electron scattering. Therefore we added this component to the emissivities computed with XSTAR. Photons much lower in energy than $`m_ec^2=`$511 keV scatter from free electrons with little change in frequency with the Thomson cross-section. Photons may also scatter from bound electrons as if the electrons were free if the binding energies are much less than the photon energies. In gases of astrophysical interest approximately 98% of the electrons are contributed by hydrogen and helium. Since the greatest binding energy in either of these two elements is 54.4 eV, for purposes of Compton scattering of X-rays of greater than approximately 0.5 keV it is a very good approximation to assume that the electron density is a constant fraction of the hydrogen density. For the solar abundance of helium relative to hydrogen there are approximately 1.15 electrons per hydrogen atom. Thus, for X-rays in the energy range 0.5–50 keV, the spectrum of Compton-scattered radiation is identical to the spectrum of input radiation and does not depend on the ionization state or temperature of the plasma. We therefore added to the volume emissivity the quantity $$j_{\nu ,\mathrm{scattered}}=1.15\times \frac{L_\nu }{L}\frac{\sigma _T}{4\pi }\xi n^2.$$ (7) The total spectrum of reprocessed X-rays from gas with 1/10 solar metal abundance for several ionization parameters is shown in the reprocessed-spectra figure. These spectra, modified by $`7\times 10^{20}`$ cm<sup>-2</sup> of interstellar absorption and convolved with the ASCA response matrix, are shown in a companion figure. In the range $`1\lesssim \mathrm{log}\xi \lesssim 3`$, the spectrum contains many strong features from recombination to the K shells of astrophysically abundant elements. At lower values of the ionization parameter, astrophysically abundant elements are not ionized up to the K shell and therefore produce spectral features in the X-ray band only by photo-ionization of K-shell electrons and subsequent radiative cascades. At higher values of the ionization parameter, the abundant elements approach complete ionization and the electron temperature approaches a limit. Therefore, the emissivity due to recombination approaches a limit while the continuum emissivity due to Compton scattering continues to increase linearly with $`\xi `$. The absorption cross-section is plotted in the cross-section figure. ### 3.1 Single $`\xi `$ Spectra and the ASCA Eclipse Spectrum The ASCA spectrum of SMC X-1 in eclipse was compared to the convolved spectra of reprocessed radiation over a range of abundances and ionization parameters. None of the reprocessing mechanisms discussed above can reproduce the observed excess in the eclipse spectrum around 6 keV. Therefore we considered only the data for energies less than 4 keV. The eclipse spectra can be fit by reprocessing models with $`\mathrm{log}\xi \gtrsim 3`$ or by models with $`\mathrm{log}\xi \lesssim 1`$ and metal abundance less than approximately 1/10 of solar (see the fit-comparison figure).
In the range $`1<\mathrm{log}\xi <3`$, even for a metal abundance as low as 1/100 of solar, the calculations predict a flux below 1 keV from recombination features that is inconsistent with the observed spectrum (fit-comparison figure, panel b). For $`\mathrm{log}\xi >3`$, recombination is very weak relative to Compton scattering, so the spectrum is insensitive to metal abundance and satisfactory fits may be obtained for abundance as large as solar (panel a). The fluorescent features present in the spectral models for $`\mathrm{log}\xi <1`$ are not as strong relative to the recombination features as in the models for intermediate ionization. This allows the data to be fit with $`\mathrm{log}\xi <1`$ as long as the metal abundance relative to solar is less than approximately 0.2 (panel c). In contrast to the $`\mathrm{log}\xi >3`$ regime, however, the spectrum of reprocessed X-rays contains fluorescent emission lines. The emission line detected at $`1.775\pm 0.020`$ keV may be the fluorescent K$`\alpha `$ line of Si II (we describe fluorescence lines according to the charge state of the atom at the time the line is emitted, after photo-ionization of the inner shell electron has occurred — e.g. K-shell photo-ionization of neutral iron results in the emission of a Fe II K$`\alpha `$ photon) or some combination of this line and the fluorescent K$`\alpha `$ lines of Si III–V, all of which have K$`\alpha `$ lines near 1.74 keV. It may also be due, at least partially, to a higher ionization stage, such as Si IX, which has a fluorescent K$`\alpha `$ line at 1.77 keV. The feature cannot be due to recombination to helium-like silicon (which would produce an emission line at 1.84 keV) because under the conditions necessary to produce that line, recombination radiation from oxygen and other elements would produce a large flux below 1 keV which is not seen. The presence of this fluorescence line and the lack of strong recombination features indicate that a significant fraction of the emission comes from gas with $`\mathrm{log}\xi <1`$. The reprocessing spectral model with $`\mathrm{log}\xi =0`$ and metal abundance equal to one-tenth of solar has a single emission line at 1.740 keV. With the model normalization fit to the SMC X-1 eclipse spectrum, this line has a flux of $`5.2\times 10^{-6}`$ photon s<sup>-1</sup> cm<sup>-2</sup>. The flux of the observed feature is $`(1.5\pm 0.5)\times 10^{-5}`$ photon s<sup>-1</sup> cm<sup>-2</sup>. If this line flux is correct, it indicates that the silicon abundance in SMC X-1 is at least two tenths of solar. Except in the range $`1<\mathrm{log}\xi <3`$, the flux from the reprocessing models is dominated by Compton scattering. Therefore, the emission measure ($`n^2V`$) necessary to reproduce a given luminosity with a single-$`\xi `$ reprocessing model should be proportional to the inverse of the ionization parameter. To confirm this, the best-fit value of $`\chi ^2`$ was computed on a grid of values of $`\mathrm{log}\xi `$ and the normalization parameter ($`K\equiv (n^2V/4\pi D^2)\times 10^{-14}\mathrm{cm}^5`$). Contours from these fits are plotted in the single-component contour figure. Indeed, the best fits are in a narrow region of parameter space around the line defined by $`K\xi =1.7`$.
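The normalization $`K`$ can be converted back to an emission measure with a one-line function; the loop below traces the Compton-dominated locus $`K\xi =1.7`$ at $`D=50`$ kpc and shows the expected $`n^2V`$ proportional to $`1/\xi `$ behavior.

```python
# Sketch: converting the normalization K = (n^2 V / 4 pi D^2) x 10^-14 cm^5
# back to an emission measure, and the locus K * xi = 1.7 found in the fits.
import math

KPC = 3.086e21                       # cm per kpc

def emission_measure(K, D_kpc):
    """n^2 V [cm^-3] implied by normalization K at distance D."""
    D = D_kpc * KPC
    return K * 4.0 * math.pi * D ** 2 * 1e14

# Along the best-fit line K = 1.7 / xi at D = 50 kpc:
for log_xi in (0.0, 2.0, 3.0):
    K = 1.7 / 10 ** log_xi
    print(log_xi, f"{emission_measure(K, 50.0):.1e}")   # EM drops as 1/xi
```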
To determine what amount of material with $`1<\mathrm{log}\xi <3`$ is allowed by the observed spectra, fits were done to model spectra consisting of reprocessing from gas at two ionization parameters. Both components had metal abundances fixed at one tenth of solar. The first component had an ionization parameter fixed at the lowest calculated value ($`\mathrm{log}\xi =-2.95`$). The ionization parameter and the normalization of the second component were stepped through a grid of values and the normalization of the first component was varied to minimize $`\chi ^2`$. Contours of the minimized $`\mathrm{\Delta }\chi ^2`$ relative to the best fit are plotted in the two-component contour figure. As noted above, model spectra for $`\mathrm{log}\xi >3`$ and for $`\mathrm{log}\xi <1`$ have the same shape since the emission is dominated by Compton scattering. For these values of the ionization parameter, good fits can be obtained for normalizations up to a value such that $`K\xi \approx 1.7`$ as above. For $`1<\mathrm{log}\xi <3`$, however, smaller normalizations are allowed. For $`\mathrm{log}\xi =2`$, $`K\lesssim 3\times 10^{-4}`$. For a distance of 50 kpc, this corresponds to an emission measure $`n^2V\lesssim 9\times 10^{57}`$ cm<sup>-3</sup>. ## 4 Spectra From Reprocessing in Model Winds ### 4.1 Spectral simulation algorithm We devised a procedure to calculate the spectrum of X-rays from a central point source reprocessed in a 3-D matter distribution. The flux received by an observer at a distance $`d`$ from a diffuse source can be written schematically as $$F_\nu =\frac{1}{4\pi d^2}\int j_\nu (𝐱)e^{-\tau _\nu (𝐱)}dV$$ (8) where $`𝐱`$ is the spatial vector and $`j_\nu `$ is the volume emission coefficient — the energy output of the gas per unit volume, frequency, time, and solid angle in the direction of the observer. The optical depth $`\tau _\nu (𝐱)`$ between the point of emission and the observer is calculated as $$\tau _\nu (𝐱)=\int _𝐱^{𝐱_{\mathrm{observer}}}\sigma _\nu (𝐱)n(𝐱)dx$$ (9) where $`\sigma _\nu `$ is the cross-section for absorption and scattering out of the line of sight per hydrogen atom. Again, we consider a photon received by the observer as having been “emitted” at the place where it last interacted with the gas. We map the density distribution onto a rectilinear grid such that lines of sight to the observer are parallel to one of the axes. For every grid cell, we calculate the ionization parameter. Then, starting at the grid cells opposite the observer, we look up the spectrum of reprocessed emission for that ionization parameter, scale it by $`n^2V`$, and add it to a running total emission spectrum. At the next grid cell we attenuate the emission spectrum from the cells behind according to the cross-sections for that ionization parameter scaled by $`nl`$, where $`l`$ is the length of the grid cell, and so on to compute the total spectrum of emission in the direction of the observer. Emission is assumed to come only from gas which is illuminated by the point radiation source. Absorption in unilluminated material is taken to be equal to absorption from gas with the lowest ionization parameter in the XSTAR table. Emission from points of gas which are not visible to the observer (i.e. those behind the companion star) is not included in the summation. The flux received by the observer is equal to this luminosity divided by 4$`\pi `$ times the distance squared.
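A minimal sketch of this ray-summing scheme is given below. The per-$`\xi `$ emissivity and opacity tables (e.g., from XSTAR) and the treatment of shadowing and visibility are left out, so this illustrates the loop structure of Eqs. (8) and (9) under those simplifying assumptions rather than reproducing our actual code.

```python
# Sketch of the ray-summing scheme of Eqs. (8)-(9). The grid is rectilinear
# with the last axis along the line of sight (index 0 farthest from the
# observer); emis[s] and sigma[s] are per-ionization-parameter lookup tables,
# and l is the cell size. Shadowing and visibility tests are omitted.
import numpy as np

def reprocessed_flux(n, L, r, emis, sigma, log_xi_grid, l, d):
    """Summed reprocessed spectrum at distance d from the system."""
    log_xi = np.log10(L / (n * r ** 2))          # ionization parameter per cell
    idx = np.searchsorted(log_xi_grid, log_xi)   # nearest tabulated xi
    idx = np.clip(idx, 0, len(log_xi_grid) - 1)
    total = np.zeros(emis.shape[-1])
    for iy in range(n.shape[0]):
        for ix in range(n.shape[1]):
            spec = np.zeros_like(total)
            for iz in range(n.shape[2]):         # far side first, observer last
                s = idx[iy, ix, iz]
                spec *= np.exp(-sigma[s] * n[iy, ix, iz] * l)  # attenuate behind
                spec += emis[s] * n[iy, ix, iz] ** 2 * l ** 3  # add j * n^2 * dV
            total += spec
    return total / (4.0 * np.pi * d ** 2)
```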
The emission spectrum is thereby converted to a flux and is output to a FITS-format XSPEC Table Model (Arnaud, 1995), which is easily imported into the XSPEC spectral fitting program (Arnaud, 1996) and can easily be convolved with instrument response matrices for comparison with observed spectral data. Our assumption that the direct radiation from the compact object is not significantly attenuated in the gas may cause an overestimate of the radiation from circumstellar material with substantial optical depth. The cross-sections plotted in the cross-section figure can be as large as $`10^{-21}`$ cm<sup>2</sup> per hydrogen atom. Therefore, this algorithm will begin to fail for column densities of order $`10^{21}`$ cm<sup>-2</sup> or greater. The fact that this algorithm does include absorption of the reprocessed radiation on its way from the reprocessing sites to the observer does, however, compensate for this error to some extent. The only reprocessed radiation that will be seen from a region of large optical depth is from the part of its surface which is both exposed to the radiation source and visible to the observer. The algorithm would not be accurate if the radiation source were surrounded by a small shell of optically thick gas. In this case, the algorithm would calculate emission from distant gas which was actually shadowed by the dense shell. However, if the optically thick material does not subtend a large solid angle about the radiation source, the error due to shadowing should be small. In regions where the density is so high that single grid cells are optically thick, the neglect of absorption by material within the same cell in which the emission was produced will cause the emission from those optically thick cells to be overestimated by a factor approximately equal to the optical depth of the cells. If the optically thick material subtends only a small solid angle at the radiation source and is not optically thick on length scales much less than one pixel, the algorithm will calculate the spectrum accurately, except that the small portion of the emission which is from the dense material will be overestimated by a factor of no more than a few. ### 4.2 Hydrodynamic Simulation The spectrum of reprocessed emission from the Blondin & Woo (1995) hydrodynamic simulation wind was synthesized using the algorithm described in Section 4.1. The density distribution on the spherical grid from the hydrodynamic simulation was interpolated onto a rectilinear grid with similar resolution: 50 grid points along the radius of the simulation, equal to $`1.43\times 10^{11}`$ cm per grid point. No points in the hydrodynamic simulation had densities larger than approximately $`3\times 10^{11}`$ cm<sup>-3</sup>, so no pixel had an optical depth significantly greater than one, and the overestimate of the contribution from optically thick cells was no greater than a factor of a few. While the gas distribution does have regions of high density near the radiation source, the high-density material subtends a small angle in the orbital plane and is mostly confined to the orbital plane, so the error in the simulated spectra due to this gas should not be large. The spectral simulation was carried out for X-ray luminosities of 1, 1.7, 3, 6, 10, 17, and 30 times $`10^{38}`$ erg s<sup>-1</sup> for metal abundances equal to solar and less by 0.5, 1.0, 1.5, and 2.0 dex.
The distribution of density and ionization parameter (for $`L_X=3\times 10^{38}`$ erg s<sup>-1</sup>) for this model is shown in the ionization-map figure. The SMC X-1 eclipse spectrum was fit by interpolation on this grid of models. With the distance to SMC X-1 fixed at 50 kpc, reasonable fits are obtained for the luminosity in a narrow range around $`6.4\times 10^{38}`$ erg s<sup>-1</sup> and for abundances less than a few hundredths of solar. Though a reasonable fit to the global spectrum can be obtained, the lack of a silicon line in the model spectrum indicates that the model is deficient. Furthermore, the best fit metal abundance is very low compared to other measurements of the abundances in the SMC (Westerlund, 1997). The reasons both for the low abundance and for the lack of the silicon feature can be seen in the differential emission measure, plotted in the emission-measure figure. For an X-ray luminosity of $`6\times 10^{38}`$ erg s<sup>-1</sup>, the hydrodynamic simulation contains gas with $`\mathrm{log}\xi >3`$ and also some gas with $`1<\mathrm{log}\xi <3`$ but no gas with $`\mathrm{log}\xi <1`$. The presence of gas with $`1<\mathrm{log}\xi <3`$ produces strong recombination emission features, and only by setting the metal abundances to be very low can the calculated spectra be made to agree with the observed spectrum. While the neglect in the algorithm of absorption between the X-ray source and the reprocessing point may have caused an overestimate of the recombination emission by a factor of a few, the total emission measure of material with $`2<\mathrm{log}\xi <3`$ in the Blondin & Woo (1995) model is $`1.33\times 10^{59}`$ cm<sup>-3</sup>, compared to the upper limit of $`9\times 10^{57}`$ cm<sup>-3</sup> for single components with $`\mathrm{log}\xi `$ in that range derived in the previous section. The lack of a silicon emission feature in the model can be explained by the absence of material at low ionization. The presence of the silicon fluorescence feature in the observed spectrum indicates that there exists gas in the wind of SMC X-1 that is more dense than any of the gas in the hydrodynamic simulations. A hydrodynamic simulation with higher spatial resolution might resolve the gas distribution into smaller, denser clumps and move the peak of the emission measure distribution below $`\mathrm{log}\xi =1`$, where it would fluoresce but not emit recombination radiation in the ASCA band. ### 4.3 Absorption of the Direct Radiation We now explore the validity of our approximation that absorption of radiation along lines of sight from the neutron star can be neglected and that the spectrum of reprocessed radiation is a function only of $`\xi `$ and of the spectrum of the radiation from the neutron star. Examination of the Blondin & Woo (1995) density distribution — plotted in the density-map figure — shows that the largest column densities occur in dense clumps. Since the densest material has the lowest ionization parameter and material at lower ionization has greater opacity, these are the places where our approximation is most likely to be invalid.
The contour denoting the highest density in the density-map figure corresponds to a density of $`10^{12}`$ cm<sup>-3</sup>. The distance from the neutron star to the first clump of this density is that of thirteen grid cells, equal to $`1.86\times 10^{12}`$ cm, which implies $`\mathrm{log}\xi =2.26`$. We ran XSTAR with the density equal to $`10^{12}`$ cm<sup>-3</sup>, the luminosity equal to $`6.4\times 10^{38}`$ erg s<sup>-1</sup>, and $`\mathrm{log}\xi `$ at the inner radius equal to 2.26. In runs with these parameters, ionization fronts like those in the optically thick models of Kallman & McCray (1982) formed where the column depth reached approximately $`2\times 10^{23}`$ cm<sup>-2</sup>. In the ionization-front figure, spectra of reprocessed radiation are shown for a point before the ionization front and a point after the ionization front — with the reprocessing spectral model for optically thin gas at the same ionization parameter shown for comparison. Before the ionization front, the spectra are almost identical for the optically thin case and the optically thick case for a given ionization parameter. After the ionization front, the spectrum of reprocessing is cut off below a few keV. Therefore, for column depths less than $`2\times 10^{23}`$ cm<sup>-2</sup>, absorption along the paths from the radiation source to the reprocessing sites can be ignored. Of the paths which begin at the neutron star, only those which go through the largest and densest clumps have column densities of this magnitude, so the error due to the unaccounted-for absorption is small. ### 4.4 Spherically Symmetric Winds A spherically symmetric power-law wind distribution of the type derived by Castor et al. (1975) for a radiation-driven wind provides another model to test against the observed spectrum. In this type of wind, the density is described by $$n(r)=\frac{\dot{M}}{4\pi r^2v_{\mathrm{\infty }}\mu m_\mathrm{p}}\left(1-R_*/r\right)^{-\beta }$$ (10) where $`R_*`$ is the radius of the star, $`\dot{M}`$ is the mass loss rate, $`m_\mathrm{p}`$ is the proton mass, and $`\mu `$ is the mean molecular weight (the number of proton masses per hydrogen atom in the gas, $`\approx 1.34`$). In eclipse, very little of the material near the stellar surface is both illuminated by the X-ray source and visible. Therefore the expression in parentheses in Equation 10 is near unity and the density is approximately proportional to $`r^{-2}`$ in the region of interest. If the binary separation is not large compared to the size of the companion, then the ionization parameter does not vary far from the value it would have everywhere if the X-ray source were at the center: $$\xi _0=4\pi v_{\mathrm{\infty }}\mu m_\mathrm{p}L\dot{M}^{-1}$$ (11) The density can then be written in terms of $`\xi _0`$: $$n(r)=\frac{L}{\xi _0r^2}$$ (12) The relation $`K\xi =1.7`$ derived in Section 3.1 must hold approximately for $`\xi =\xi _0`$. In order to estimate $`\xi _0`$, we estimate $`K`$ for a spherical wind: $$K=\frac{1}{4\pi D^2}\int n^2dV\times 10^{-14}\mathrm{cm}^5$$ (13) and using Equation 12, $`K`$ $`=`$ $`10^{-14}\mathrm{cm}^5(4\pi D^2)^{-1}\int _{R_*}^{\mathrm{\infty }}n^2dV`$ (14) $`=`$ $`10^{-14}\mathrm{cm}^5(4\pi D^2)^{-1}\xi _0^{-2}L^2\int _{R_*}^{\mathrm{\infty }}r^{-4}4\pi r^2dr`$ $`=`$ $`10^{-14}\mathrm{cm}^5D^{-2}\xi _0^{-2}L^2R_*^{-1}`$ Then, for $`D`$=50 kpc, $`L=3\times 10^{38}`$ erg s<sup>-1</sup>, and $`R_*=17R_{\mathrm{\odot }}`$, $`K\xi _0\approx 1.7`$ implies $`\xi _0\approx 2.4\times 10^4`$, or less if only part of the flux is due to the extended wind.
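A quick numerical evaluation of Eq. (11) in cgs units reproduces the coefficient of the scaling relation quoted next; the values below are the fiducial ones of that relation, not fitted quantities.

```python
# Numeric check of Eq. (11) in cgs units; it reproduces the coefficient of
# the scaling relation of the following equation for the fiducial values.
import math

m_p = 1.6726e-24                    # proton mass [g]
mu = 1.34                           # mean molecular weight per hydrogen atom
MSUN_PER_YR = 1.989e33 / 3.156e7    # 1 solar mass per year in [g/s]

L = 1e38                            # [erg/s]
Mdot = 1e-6 * MSUN_PER_YR           # [g/s]
v_inf = 1e3 * 1e5                   # 10^3 km/s in [cm/s]

xi0 = 4 * math.pi * v_inf * mu * m_p * L / Mdot
print(f"xi_0 = {xi0:.2e}")          # -> ~4.5e3
```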
From Equation 11, we find $$\xi _0\approx 4.5\times 10^3\left(\frac{L}{10^{38}\mathrm{erg}\mathrm{s}^{-1}}\right)\left(\frac{\dot{M}}{10^{-6}M_{\mathrm{\odot }}\mathrm{yr}^{-1}}\right)^{-1}\left(\frac{v_{\mathrm{\infty }}}{10^3\mathrm{km}\mathrm{s}^{-1}}\right)^{-1}$$ (15) So the wind parameters of typical B-type stars ($`\dot{M}\sim 10^{-7}`$–$`10^{-6}M_{\mathrm{\odot }}`$ yr<sup>-1</sup>, $`v_{\mathrm{\infty }}\sim 1500`$ km s<sup>-1</sup>) result in a wind at high ionization which does not emit strong recombination features. However, such a spherically symmetric high-ionization wind cannot produce the observed silicon emission feature. A component with $`\mathrm{log}\xi <1`$ must be included. An exponential region of the atmosphere, of the type inferred from extended eclipse transitions, may be a good candidate for such a low-ionization region. An atmosphere with a scale height much less than the stellar radius would be illuminated and visible in only a small region. This region has the same distance from the compact object as does the center of the companion star ($`1.95\times 10^{12}`$ cm for SMC X-1). For this region to have $`\mathrm{log}\xi <1`$, a density of approximately $`10^{14}`$ cm<sup>-3</sup> is required for an X-ray luminosity of $`5\times 10^{38}`$ erg s<sup>-1</sup>. Woo (1993) fit Ginga spectra during eclipse transitions to absorption by wind models which had the form of Equation 10 but with an exponential region at the base of the wind. At the minimum radius of illumination and visibility, these models have densities no greater than $`5\times 10^{12}`$ cm<sup>-3</sup>. At the stellar surface these models have densities no greater than $`6\times 10^{13}`$ cm<sup>-3</sup>. However, only very near eclipse transitions can material at the stellar surface be both illuminated by the X-ray source and visible to the observer. The distribution of $`\mathrm{log}\xi `$ for the best fit parameters to one of the eclipse transitions is plotted in the eclipse-transition figure for an X-ray luminosity of $`3\times 10^{38}`$ erg s<sup>-1</sup>. ## 5 Conclusions Through comparison of ASCA spectra from an observation of SMC X-1 during eclipse and archival data from outside of eclipse, we derive the following conclusions: 1. The X-ray spectrum of SMC X-1 has approximately the same form in eclipse and out of eclipse. This indicates that most of the X-ray reprocessing in the circumstellar matter is due to Compton scattering. 2. The lack of strong recombination features in the eclipse spectrum indicates that most of the gas in the wind of SMC X-1 is either highly ionized ($`\mathrm{log}\xi >3`$) or weakly ionized ($`\mathrm{log}\xi <1`$). Very little material is in intermediate ionization states. 3. We find evidence of a small but significant emission feature near the energy of the fluorescence line of neutral silicon (1.74 keV). The presence of this feature, if confirmed by better measurements, would indicate that a significant amount of gas must have very low ionization. 4. The Blondin & Woo (1995) model of the density distribution derived by 3-D hydrodynamic simulation cannot reproduce the observed eclipse spectrum. It contains a large amount of material with $`1<\mathrm{log}\xi <3`$ which would produce a large flux of recombination radiation. The spectral resolution of ASCA makes these observations very sensitive to these features. However, no recombination features are detected. Also, this model does not predict any material at low enough ionization to produce the observed silicon fluorescence line.
A hydrodynamic simulation with higher spatial resolution might resolve smaller, denser clumps and produce an emission measure distribution which would reproduce the observed eclipse spectrum. 5. A smooth, spherically symmetric wind could be sufficiently ionized so as not to emit recombination radiation features which would have been detectable in our observation. However, it is difficult to construct such a wind that is consistent with observations of X-ray absorption in eclipse transitions and with the observed intensity of the silicon fluorescence line. We thank J. Blondin and J. Woo for making available to us the density distribution from their hydrodynamic simulation in machine-readable format. This work was supported in part by NASA grant NAG5-2540. PSW received support from a NASA Graduate Student Researchers Program Fellowship through Goddard Space Flight Center (NGT5-57).
# Prospects for solar axions searches with crystals via Bragg scattering ## 1 Introduction Introduced twenty years ago as the Nambu–Goldstone boson of the Peccei–Quinn symmetry to explain in an elegant way CP conservation in QCD, the axion is remarkably also one of the best candidates to provide at least a fraction of the Non-Baryonic Dark Matter of the Universe. Axion phenomenology is determined by its mass $`m_a`$, which in turn is fixed by the scale $`f_a`$ of the Peccei–Quinn symmetry breaking. No hint is provided by theory about where the $`f_a`$ scale should be. A combination of astrophysical, cosmological and nuclear physics constraints restricts the allowed range of viable axion masses to a relatively narrow window: $`10^{-6}\mathrm{eV}<m_a<10^{-2}\mathrm{eV}`$ and $`3\mathrm{eV}<m_a<20\mathrm{eV}`$. The physical process used in axion search experiments is the Primakov effect. It makes use of the coupling $`g_{a\gamma \gamma }`$ between the axion field and the electromagnetic tensor and allows for the conversion of the axion into a photon. This coupling appears automatically in every axion model, and like all the other axion couplings, it is proportional to $`m_a`$: $`g_{a\gamma \gamma }\simeq 0.19C_{a\gamma \gamma }(m_a/\mathrm{eV})\times 10^{-9}\mathrm{GeV}^{-1}`$, where the constant $`C_{a\gamma \gamma }`$ depends on the axion model considered. Two popular models are the GUT–DFSZ axion ($`C_{a\gamma \gamma }=0.75\pm 0.08`$) and the KSVZ axion ($`C_{a\gamma \gamma }=-1.92\pm 0.08`$). However, the possibility to build viable axion models with different values of $`C_{a\gamma \gamma }`$ and the theoretical uncertainties involved imply that a very small or even vanishing $`g_{a\gamma \gamma }`$ cannot in principle be excluded. ## 2 Primakov conversion in crystals Axions can be efficiently produced in the interior of the Sun by Primakov conversion of the blackbody photons in the fluctuating electric field of the plasma. Solid state detectors provide a simple mechanism for detecting these axions. Axions can pass in the proximity of the atomic nuclei of the crystal, where the intense electric field can trigger their conversion into photons. Because the solar axion flux has an outgoing average energy of about 4 keV (corresponding to the temperature in the core of the Sun, $`T\sim 10^7`$ K), these axions can produce detectable x-rays in a crystal detector. Depending on the direction of the incoming axion flux with respect to the planes of the crystal lattice, a coherent effect can be produced when the Bragg condition is fulfilled, thus leading to a strong enhancement of the signal. Making use of the calculation of the flux of solar axions of Ref., as well as the cross-section of the process and appropriate crystallographic information, we calculate the expected axion-to-photon conversion count rate in a solid-state detector (see Ref. for further details). Some examples of this count rate for several materials and energy windows are shown in Figure 1 as a function of time during one day. As expected, the signal presents a strong sub-daily dependence on time, due to the motion of the Sun in the sky. All this information can be used to extract a possible axion signal from a set of experimental data or, in case of non-appearance of such a signal, to obtain a limit on the axion–photon coupling $`g_{a\gamma \gamma }`$. This process is independent of $`m_a`$ and so are the achievable bounds for $`g_{a\gamma \gamma }`$.
This fact is particularly appealing, since other experimental techniques are limited to a more restricted mass range. The method described above has been applied to the 311 days of data obtained by the COSME 0.234 kg germanium detector (which is also being used for Dark Matter detection, as is briefly commented on in the Dark Matter review talk in these proceedings) in the Canfranc Underground Laboratory, with an effective threshold of 2.5 keV and a low-energy background of 0.7 c/keV/kg/day. With these conditions, and despite its lower statistics, we reach a limit $`g_{a\gamma \gamma }<2.8\times 10^{-9}\mathrm{GeV}^{-1}`$, very close to the one obtained by the SOLAX Collaboration, which is the (mass-independent but solar-model-dependent) most stringent laboratory bound on the axion–photon coupling obtained so far. ## 3 Future prospects The sensitivity of an axion experimental search can be expressed as the upper bound on $`g_{a\gamma \gamma }`$ which such an experiment would provide from the non-appearance of the axion signal, for a given crystal, background and exposure. It is easy to verify that the ensuing limit on the axion–photon coupling $`g_{a\gamma \gamma }^{lim}`$ scales with the background and exposure in the following way: $`g_{a\gamma \gamma }\lesssim g_{a\gamma \gamma }^{lim}\simeq K\left({\displaystyle \frac{\mathrm{b}}{\mathrm{cpd}/\mathrm{kg}/\mathrm{keV}}}\times {\displaystyle \frac{\mathrm{kg}}{\mathrm{M}}}\times {\displaystyle \frac{\mathrm{years}}{\mathrm{T}}}\right)^{\frac{1}{8}}`$ $`\times 10^{-9}\mathrm{GeV}^{-1}`$ (1) where $`M`$ is the total mass and $`b`$ is the average background. The factor $`K`$ depends on the parameters of the crystal, as well as on the experimental threshold and resolution. In order to perform a systematic analysis of the axion-detection capability of crystal detectors, we have applied the technique described in the previous section to several materials. The result is summarized in Table 1, where the limit given by the experiment of Ref. is compared to those attainable with COSME and other running, being-installed, and planned crystal detector experiments (see for references). In Table 1 a Pb detector is also included, to give an indication of the best improvement that one would expect by selecting heavy materials to take advantage of the proportionality to $`Z^2`$ of the cross section (crystals of PbWO<sub>4</sub> have been considered in the setup of CUORE). In Fig. 2 the result of our analysis is compared to the present astrophysical and experimental bounds in the $`m_a`$–$`g_{a\gamma \gamma }`$ plane. The horizontal thick line represents the constraint $`g_{a\gamma \gamma }<`$3$`\times `$10<sup>-10</sup> GeV<sup>-1</sup> taken from Table 1. The mass intervals $`m_a<10^{-6}`$ eV, $`10^{-2}`$ eV $`<m_a<3`$ eV, and $`m_a>20`$ eV are excluded respectively by cosmological overclosure, SN1987A, and oxygen excitation in water Cherenkov detectors, the edges of all the excluded regions being subject to many astrophysical and theoretical uncertainties (see for further details about the figure). As shown in the expression for the $`g_{a\gamma \gamma }`$ bound of Eq. (1), the improvement in background and accumulation of statistics is washed out by the 1/8 power dependence of $`g_{a\gamma \gamma }`$ on such parameters. It is evident, then, that crystals have no realistic chance to challenge the globular cluster limit.
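The scaling of Eq. (1) is easy to put into code; $`K`$ is the crystal-dependent factor of Table 1, and the value used in the example below is a placeholder rather than an actual Table 1 entry.

```python
# Sketch of the sensitivity scaling of Eq. (1). K is the crystal-dependent
# factor of Table 1; the value below is a placeholder, not a Table 1 entry.
def g_limit(K, b, M, T):
    """g_agg upper bound [GeV^-1] for background b [cpd/kg/keV],
    detector mass M [kg], and exposure time T [yr]."""
    return K * (b / (M * T)) ** 0.125 * 1e-9

# The 1/8 power means a 100x better background-exposure product improves
# the bound by only ~1.8x.
print(g_limit(K=2.0, b=0.7, M=0.234, T=0.85))   # COSME-like exposure, K hypothetical
```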
A discovery of the axion by this technique would presumably imply either a systematic error in the stellar-count observations in globular clusters or a substantial change in the theoretical models that describe the late-stage evolution of low-metallicity stars. On the other hand, the sensitivity required for crystal detectors to explore a range of $`g_{a\gamma \gamma }`$ compatible with the solar limit appears to be within reach, provided that large improvements in background as well as a substantial increase in statistics are achieved.
# A THEORETICAL LIGHT-CURVE MODEL FOR THE 1999 OUTBURST OF U SCORPII ## 1. INTRODUCTION U Scorpii is one of the well-observed recurrent novae, characterized by the shortest recurrence period ($`\sim 8`$ yr), the fastest decline of its light curve (0.6 mag day<sup>-1</sup>), and its extremely helium-rich ejecta (He/H $`\sim 2`$ by number; see, e.g., Webbink et al. (1987) for a summary). Historically, the outbursts of U Sco were observed in 1863, 1906, 1936, 1979, 1987, and the latest in 1999. In particular, the 1999 outburst was well observed from the rising phase to the cooling phase by many observers (e.g., Munari et al. (1999)), including eclipses (Matsumoto & Kato (1999)), thus providing us with a unique opportunity to construct a comprehensive model of U Sco during the outburst. Our purpose in this Letter is to construct a detailed theoretical model for U Sco based on our light-curve analysis. A part of the method has been described in Hachisu & Kato (1999), in which they explain the second peak of T CrB outbursts, but we again briefly summarize it in §2 and will fully describe it in a separate paper. In §3, by directly fitting our theoretical light curve to the observational data, we derive various physical parameters of U Sco. Discussions follow in §4, especially in relation to the recently proposed progenitor model of Type Ia supernovae (SNe Ia). ## 2. THEORETICAL LIGHT CURVES Schaefer (1990) and Schaefer & Ringwald (1995) observed eclipses of U Sco in the quiescent phase and determined the orbital period ($`P=1.23056`$ days) and the ephemeris (HJD 2,451,235.777 $`+`$ 1.23056$`E`$) at the epoch of mideclipse. It is very likely that the companion is a slightly evolved main-sequence star (MS) that expands to fill its Roche lobe after a large part of the central hydrogen is consumed. The phase duration ($`\mathrm{\Delta }\phi \approx 0.1`$) of the primary eclipses in the quiescent phase indicates an inclination angle of $`i\approx 80\mathrm{°}`$ (e.g., Warner (1995)). Our model is shown graphically in Figure 1. We have revised Kato’s (1990) U Sco light-curve models in the following two ways: (1) the opacity has been changed from the Los Alamos opacity (Cox, King, & Tabor 1973) to the OPAL opacity (Iglesias & Rogers (1996)) and (2) the reflection/irradiation effects both of the companion star and of the accretion disk are introduced in order to follow the light curve during the entire phase of the outburst. The visual light curve is calculated from three components of the system: (1) the white dwarf (WD) photosphere, (2) the MS photosphere, and (3) the accretion disk surface. ### 2.1. Decay Phase of Novae In the thermonuclear runaway model (e.g., Starrfield, Sparks, & Shaviv 1988), WD envelopes quickly expand to $`\sim 10`$–$`100R_{\mathrm{\odot }}`$ or more and then the photospheric radius gradually shrinks to the original size of the white dwarf. Correspondingly, the optical luminosity reaches its maximum at the maximum expansion of the photosphere and then decays toward the level of the quiescent phase, keeping the bolometric luminosity almost constant. Since the WD envelope reaches a steady state after maximum, we are able to follow the development by a unique sequence of steady-state solutions, as shown by Kato & Hachisu (1994). Optically thick winds, blowing from the WD in the decay phase of novae, play a key role in determining the nova duration because a large part of the envelope mass is quickly blown off in the wind. In the decay phase, the envelope structure at any given time is specified by a unique solution. The envelope mass $`\mathrm{\Delta }M`$ is decreasing because of both the wind mass loss at a rate of $`\dot{M}_{\mathrm{wind}}`$ and the hydrogen shell burning at a rate of $`\dot{M}_{\mathrm{nuc}}`$, i.e., $$\frac{d}{dt}\mathrm{\Delta }M=\dot{M}_{\mathrm{acc}}-\dot{M}_{\mathrm{wind}}-\dot{M}_{\mathrm{nuc}},$$ (1) where $`\dot{M}_{\mathrm{acc}}`$ is the mass accretion rate of the WD. By integrating equation (1), we follow the development of the envelope and then obtain the evolutionary changes of physical quantities such as the photospheric temperature $`T_{\mathrm{ph}}`$, the photospheric radius $`R_{\mathrm{ph}}`$, the photospheric velocity $`v_{\mathrm{ph}}`$, the wind mass loss rate $`\dot{M}_{\mathrm{wind}}`$, and the nuclear burning rate $`\dot{M}_{\mathrm{nuc}}`$. When the envelope mass decreases below the critical mass, the wind stops, and after that, the envelope mass is decreased only by nuclear burning. When the envelope mass decreases further, hydrogen shell burning disappears, and the WD enters a cooling phase.
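The integration of equation (1) can be sketched schematically as follows; the rate functions are placeholders here, since in the actual models $`\dot{M}_{\mathrm{wind}}`$ and $`\dot{M}_{\mathrm{nuc}}`$ come from the steady-state envelope solutions at the current envelope mass.

```python
# Minimal sketch of integrating Eq. (1) for the envelope mass. The rate
# functions are placeholders: in the actual models, Mdot_wind and Mdot_nuc
# are read off the steady-state envelope solutions at the current Delta_M.
def evolve_envelope(dM0, Mdot_acc, Mdot_wind, Mdot_nuc, dt, t_end):
    """Return a (t, Delta_M) history; rates are functions of Delta_M,
    all in consistent units (e.g., solar masses and days)."""
    t, dM = 0.0, dM0
    history = [(t, dM)]
    while t < t_end and dM > 0.0:
        dM += (Mdot_acc - Mdot_wind(dM) - Mdot_nuc(dM)) * dt   # Eq. (1)
        t += dt
        history.append((t, dM))
    return history
```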
The envelope mass $`\mathrm{\Delta }M`$ is decreasing because of both the wind mass loss at a rate of $`\dot{M}_{\mathrm{wind}}`$ and the hydrogen shell burning at a rate of $`\dot{M}_{\mathrm{nuc}}`$, i.e.,

$$\frac{d}{dt}\mathrm{\Delta }M=\dot{M}_{\mathrm{acc}}-\dot{M}_{\mathrm{wind}}-\dot{M}_{\mathrm{nuc}},$$ (1)

where $`\dot{M}_{\mathrm{acc}}`$ is the mass accretion rate of the WD. By integrating equation (1), we follow the development of the envelope and then obtain the evolutionary changes of physical quantities such as the photospheric temperature $`T_{\mathrm{ph}}`$, the photospheric radius $`R_{\mathrm{ph}}`$, the photospheric velocity $`v_{\mathrm{ph}}`$, the wind mass-loss rate $`\dot{M}_{\mathrm{wind}}`$, and the nuclear burning rate $`\dot{M}_{\mathrm{nuc}}`$. When the envelope mass decreases below the critical mass, the wind stops; after that, the envelope mass is decreased only by nuclear burning. When the envelope mass decreases further, hydrogen shell burning ends, and the WD enters a cooling phase.

### 2.2. White Dwarf Photosphere

After the optical peak, the photosphere shrinks with the decreasing envelope mass, mainly because of the wind mass loss. Then the photospheric temperature increases and the visual light decreases because the main emitting region moves blueward (to the UV and then to soft X-rays). We have assumed a blackbody photosphere for the white dwarf envelope and estimated the visual magnitude of the WD photosphere with a window function given by Allen (1973). The photospheric surface is divided into 32 pieces in the latitudinal angle and into 64 pieces in the longitudinal angle, as shown in Figure 1. For simplicity, we do not consider the limb-darkening effect.

### 2.3. Companion’s Irradiated Photosphere

To construct a light curve, we have also included the contribution of the companion star irradiated by the WD photosphere. The companion star is assumed to rotate synchronously on a circular orbit, and its surface is assumed to fill the inner critical Roche lobe, as shown in Figure 1. Dividing the latitudinal angle into 32 pieces and the longitudinal angle into 64 pieces, we have summed up the contribution of each patch; for simplicity, we neglect both the limb-darkening and the gravity-darkening effects of the companion star. Here we assume that 50% of the absorbed energy is reemitted from the companion surface with a blackbody spectrum at a local temperature. The original (nonirradiated) photospheric temperature of the companion star is assumed to be $`T_{\mathrm{ph},\mathrm{MS}}=4600`$ K because of $`B-V=1.0`$ in eclipse (Schaefer & Ringwald (1995); see also Johnston & Kulkarni (1992)). Two other cases, $`T_{\mathrm{ph},\mathrm{MS}}=5000`$ and 4400 K, have been examined but do not show any essential differences in the light curves.

### 2.4. Accretion Disk Surface

We have included the luminosity coming from the accretion disk irradiated by the WD photosphere once the accretion disk reappears several days after the optical maximum, i.e., when the WD photosphere shrinks to smaller than the size of the inner critical Roche lobe. We then assume that the radius of the accretion disk gradually increases/decreases to

$$R_{\mathrm{disk}}=\alpha R_1^{*},$$ (2)

in a few orbital periods, where $`\alpha `$ is a numerical factor indicating the size of the accretion disk and $`R_1^{*}`$ is the effective radius of the inner critical Roche lobe for the primary WD component.
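The patch-based irradiation scheme used above for the companion (and here for the disk) can be illustrated with a short toy computation: a 32×64 grid of surface elements on a sphere, each heated by a point-source WD and reemitting 50% of the absorbed flux as a local blackbody. This is only a minimal sketch (no Roche geometry, no limb darkening, no eclipses), and the numbers in the usage line are rough assumed values, not the model's fitted parameters.

```python
import numpy as np

SIGMA = 5.670e-5                       # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4

def irradiated_luminosity(l_wd, a, r_comp, t_orig=4600.0, eta=0.5,
                          n_lat=32, n_lon=64):
    """Total luminosity of an irradiated spherical 'companion' (toy model).

    l_wd : WD luminosity (erg/s), treated as a distant point source on the +x axis
    a    : orbital separation (cm); r_comp: companion radius (cm)
    eta  : fraction of absorbed flux reemitted locally (50% in the text)
    """
    th = (np.arange(n_lat) + 0.5) * np.pi / n_lat          # colatitude of patch centers
    ph = (np.arange(n_lon) + 0.5) * 2.0 * np.pi / n_lon    # longitude of patch centers
    th, ph = np.meshgrid(th, ph, indexing="ij")
    d_area = r_comp**2 * np.sin(th) * (np.pi / n_lat) * (2.0 * np.pi / n_lon)

    # cosine of the angle between each surface normal and the WD direction;
    # only the day side (mu > 0) is irradiated
    mu = np.clip(np.sin(th) * np.cos(ph), 0.0, None)
    f_irr = eta * l_wd / (4.0 * np.pi * a**2) * mu         # reemitted part of absorbed flux

    t_loc = (t_orig**4 + f_irr / SIGMA) ** 0.25            # local patch temperature
    return np.sum(SIGMA * t_loc**4 * d_area)               # sum of patch blackbodies

# e.g., a 4600 K companion irradiated by a ~10^38 erg/s WD (assumed sample geometry):
print(irradiated_luminosity(1e38, a=5e11, r_comp=1.7e11))
```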
The surface of the accretion disk absorbs photons and reemits the absorbed energy with a blackbody spectrum at a local temperature. Here, we assume that 50% of the absorbed energy is emitted from the surface, while the other half is carried into the interior of the accretion disk and eventually brought onto the WD. The original temperature of the disk surface is assumed to be constant at $`T_{\mathrm{ph},\mathrm{disk}}=4000`$ K, including the disk rim. Viscous heating is neglected because it is much smaller than the irradiation effects. We also assume that the accretion disk is axisymmetric and has a thickness given by

$$h=\beta R_{\mathrm{disk}}\left(\frac{\varpi }{R_{\mathrm{disk}}}\right)^2,$$ (3)

where $`h`$ is the height of the surface from the equatorial plane, $`\varpi `$ is the distance on the equatorial plane from the center of the WD, and $`\beta `$ is a numerical factor indicating the degree of thickness. We have adopted a $`\varpi `$-squared law simply to mimic the effect of the flaring up of the accretion disk rim (e.g., Schandl, Meyer-Hofmeister, & Meyer 1997); the exponent does not affect the disk luminosity much, mainly because the central part of the disk is not seen. The surface of the accretion disk is divided logarithmically and evenly into 32 pieces in the radial direction and evenly into 64 pieces in the azimuthal angle, as shown in Figure 1. The outer rim of the accretion disk is also divided into 64 pieces in the azimuthal direction and 8 pieces in the vertical direction by rectangles. We have reproduced the light curves in the quiescent phase ($`B`$ magnitude; Schaefer (1990)) by adopting a set of parameters such as $`\alpha =0.7`$, $`\beta =0.30`$, a 50% irradiation efficiency of the accretion disk, and $`i=80\mathrm{°}`$ for a WD luminosity of $`1000L_{\odot }`$ ($`A_B=0.8`$ and $`d=15`$ kpc, or $`A_B=2.8`$ and $`d=6`$ kpc; see below), which roughly corresponds to a mass accretion rate of $`\dot{M}_{\mathrm{acc}}=2.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> for the $`1.37M_{\odot }`$ WD.

## 3. RESULTS

Figure 2 shows the observational points and our calculated $`V`$-magnitude light curve (solid line). To maintain an accretion disk, we assume a mass accretion rate of $`\dot{M}_{\mathrm{acc}}=1\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> during the outburst. To fit the early linear decay phase ($`t\sim 1`$–10 days after maximum), we have calculated a total of 140 $`V`$-magnitude light curves for five cases of the WD mass, $`M_{\mathrm{WD}}=1.377`$, 1.37, 1.36, 1.35, and $`1.3M_{\odot }`$, with various hydrogen contents of the envelope, $`X=0.04`$, 0.05, 0.06, 0.07, 0.08, 0.10, and 0.15, where the metallicity $`Z=0.02`$ is fixed, each for four companion masses of $`M_{\mathrm{MS}}=0.8`$, 1.1, 1.5, and $`2.0M_{\odot }`$. We choose $`1.377M_{\odot }`$ as a limiting mass just before the SN Ia explosion in the W7 model ($`M_{\mathrm{Ia}}=1.378M_{\odot }`$) of Nomoto, Thielemann, & Yokoi (1984). We have found that the early 7 day light curve hardly depends on the chemical composition or the companion mass but is mainly determined by the white dwarf mass. This is because (1) the early-phase light curve is determined mainly by the WD photosphere (Fig. 2, dotted line) and therefore by the wind mass-loss rate, and (2) the optically thick wind is driven by the strong peak of the OPAL opacity, which is due not to helium or hydrogen lines but to iron lines (Kato & Hachisu (1994)). Therefore, the determination of the WD mass is almost independent of the hydrogen content, the companion mass, or the disk configuration.
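Since the fit described above is a brute-force comparison over a grid of 140 model light curves, its logic can be sketched compactly. In the snippet below, `compute_light_curve()` is a crude dummy standing in for the full model of §2, and the "observations" are synthetic; only the grid values are taken from the text.

```python
import numpy as np

# Schematic of the 140-model grid comparison (5 WD masses x 7 hydrogen contents
# x 4 companion masses); the grid values are those quoted in the text.
wd_masses = [1.377, 1.37, 1.36, 1.35, 1.30]              # M_sun
x_hydrogen = [0.04, 0.05, 0.06, 0.07, 0.08, 0.10, 0.15]
ms_masses = [0.8, 1.1, 1.5, 2.0]                          # M_sun

def compute_light_curve(m_wd, x, m_ms):
    # Dummy stand-in: a linear decay whose slope depends on the WD mass only,
    # mimicking the insensitivity to x and m_ms found in the text.
    return lambda t: 8.0 + 0.6 * t / m_wd

t_obs = np.linspace(1.0, 10.0, 20)                        # days after maximum
v_obs = 8.0 + 0.6 * t_obs / 1.37                          # synthetic "observations"

def chi2(model):
    return np.mean((v_obs - model(t_obs)) ** 2)

best = min(((m, x, q) for m in wd_masses for x in x_hydrogen for q in ms_masses),
           key=lambda p: chi2(compute_light_curve(*p)))
print(best)   # picks M_WD = 1.37 for these synthetic data
```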
The $`1.37M_{\odot }`$ light curve is in much better agreement with the observations than those for the other WD masses. The distance to U Sco is estimated to be 5.4–8.0 kpc, as shown in Figure 2, if we assume no absorption ($`A_V=0`$). Here, we obtain 5.4 kpc from the fit to the upper bound and 8.0 kpc from the fit to the lower bound of the observational points. For an absorption of $`A_V=0.6`$ (Barlow et al. (1981)), we obtain a distance of 4.1–6.1 kpc, and then U Sco is located 1.5–2.3 kpc above the Galactic plane ($`b=22\mathrm{°}`$). These ranges of the distance are reasonable compared with old estimates of the distance to U Sco such as 14 kpc (e.g., Warner (1995)).

To fit the cooling phase ($`t\sim `$ 30–40 days after maximum), we must adopt a hydrogen content of $`X=0.05`$ among $`X=0.04`$, 0.05, 0.06, 0.07, 0.08, 0.10, and 0.15 for $`M_{\mathrm{WD}}=1.37M_{\odot }`$. This is because the hydrogen content $`X`$ fixes the mass of hydrogen available for burning in the envelope; therefore, $`X`$ determines the duration of hydrogen shell burning, i.e., the duration of the midplateau phase. For $`X=0.05`$, the optically thick wind stops at $`t=17.5`$ days, and the steady hydrogen shell burning ends at $`t=18.2`$ days. This duration of the strong wind phase is very consistent with the BeppoSAX supersoft X-ray detection 19–20 days after the optical peak (Kahabka et al. (1999)), because supersoft X-rays are self-absorbed by the wind itself during the strong wind phase. It should be noted that we do not use this detection of the supersoft X-rays to constrain any physical parameters. Hydrogen shell burning begins to decay from $`t=18.2`$ days but still continues to supply a part of the luminosity; the rest comes from the thermal energy of the hydrogen envelope and the hot ash (helium) below the hydrogen layer. The thermal energy amounts to several times 10<sup>43</sup> ergs, which can supply a bolometric luminosity of 10<sup>38</sup> erg s<sup>-1</sup> for ten days or so, until $`t\sim 30`$ days, as seen in Figures 2 and 3.

The envelope mass at the peak is estimated to be $`\mathrm{\Delta }M\sim 3\times 10^{-6}M_{\odot }`$ for $`M_{\mathrm{WD}}=1.37M_{\odot }`$, $`X=0.05`$, and $`Z=0.02`$; thus, the average mass accretion rate of the WD becomes $`2.5\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> in the quiescent phase between 1987 and 1999, if no WD matter is dredged up into the envelope. Such a high mass accretion rate strongly indicates that the mass transfer is thermally unstable, given the constraint that the companion star is a slightly evolved main-sequence star (e.g., van den Heuvel et al. (1992)). The wind carries away about 60% of the envelope mass, i.e., $`1.8\times 10^{-6}M_{\odot }`$, which is much more massive than the observational indication of $`\sim 1\times 10^{-7}M_{\odot }`$ in the 1979 outburst by Williams et al. (1981). The residual, $`1.2\times 10^{-6}M_{\odot }`$, can accumulate on the WD. Therefore, the WD can grow in mass at an average rate of $`1.0\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup>. Our model fully reproduces the observational light curve if we choose $`\alpha =1.4`$ and $`\beta =0.30`$ during the wind phase and $`\alpha =1.2`$ and $`\beta =0.35`$ during the cooling phase for a $`1.37M_{\odot }`$ WD $`+`$ $`1.5M_{\odot }`$ MS with $`i\sim 80\mathrm{°}`$. Since we obtain similar light curves for the four companion masses, i.e., 0.8, 1.1, 1.5, and 2.0 $`M_{\odot }`$, we show here only the results for $`M_{\mathrm{MS}}=1.5M_{\odot }`$.
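The mass budget quoted in the last paragraph can be verified with a few lines of arithmetic (all input numbers are from the text; the 12 yr baseline is the 1987–1999 quiescent interval):

```python
# Envelope mass budget for the 1999 outburst (values from the text).
recurrence = 12.0                 # yr, between the 1987 and 1999 outbursts
delta_m = 3.0e-6                  # M_sun, envelope mass at the optical peak

m_dot_acc = delta_m / recurrence  # -> 2.5e-7 M_sun/yr average accretion rate
m_wind = 0.6 * delta_m            # -> 1.8e-6 M_sun blown off in the wind
m_kept = delta_m - m_wind         # -> 1.2e-6 M_sun retained by the WD
growth = m_kept / recurrence      # -> 1.0e-7 M_sun/yr net WD growth rate

print(m_dot_acc, m_wind, m_kept, growth)
```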
It is almost certain that the luminosity exceeds the Eddington limit during the first day of the outburst; our model cannot reproduce this super-Eddington phase.

## 4. DISCUSSION

Eclipses in the 1999 outburst were observed by Matsumoto & Kato (1999). The depth of the eclipses is $`\mathrm{\Delta }V\sim 0.5`$–0.8, which is much shallower than that in the quiescent phase ($`\mathrm{\Delta }B\sim 1.5`$, Schaefer (1990)). Their observations also indicate almost no sign of the reflection effect of the companion star. Thus, we are forced to choose a relatively large disk radius that exceeds the Roche lobe ($`\alpha \sim 1.4`$) and a large flaring-up edge of the disk ($`\beta \sim 0.30`$). If we adopt a size of the accretion disk smaller than the Roche lobe, on the other hand, the companion star completely occults both the accretion disk and the white dwarf surface. As a result, we obtain a primary minimum as deep as 1.5–2.0 mag or more. We have tested a total of 140 cases of the set ($`\alpha ,\beta `$), i.e., 14 cases of $`\alpha =0.7`$–2.0 in steps of $`0.1`$, each for 10 cases of $`\beta =0.05`$–0.50 in steps of $`0.05`$. The best-fit light curve is obtained for $`\alpha =1.4`$ and $`\beta =0.30`$. Then a large part of the light from the WD photosphere is blocked by the disk edge, as shown in Figure 1. The large disk size and flaring-up edge may be driven by the Kelvin-Helmholtz instability due to the velocity difference between the wind and the disk surface. After the optically thick wind stops, photon pressure may drive a surface flow on the accretion disk (Fukue & Hachiya (1999)), and we may expect an effect on the accretion disk similar to that of the wind, but much weaker. Thus, we may have a smaller radius of $`\alpha =1.2`$ but still $`\beta =0.35`$. A much more detailed analysis of the results will be presented elsewhere.

It has been suggested that U Sco is a progenitor of SNe Ia (e.g., Starrfield, Sparks, & Truran 1985) because its WD mass is very close to the critical mass ($`M_{\mathrm{Ia}}=1.38M_{\odot }`$) for the SN Ia explosion (Nomoto et al. (1984)). Hachisu et al. (1999) proposed a new evolutionary path to SNe Ia, in which they clarified the reason why the companion star becomes helium-rich even though it is only slightly evolved from the zero-age main sequence. A typical progenitor of SNe Ia in their WD+MS model, with $`M_{1,\mathrm{WD}}=1.37M_{\odot }`$, $`M_{2,\mathrm{MS}}\sim 1.3M_{\odot }`$, and $`\dot{M}_{2,\mathrm{MS}}\sim 2\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> just before the SN Ia explosion, is exactly the same kind of system as U Sco. Thus, it is very likely that the WD in U Sco is now growing in mass at an average rate of $`\sim 1\times 10^{-7}M_{\odot }`$ yr<sup>-1</sup> toward the critical mass for the SN Ia explosion and will soon explode as an SN Ia if the WD core consists of carbon and oxygen.

We are very grateful to many amateur astronomers who observed U Sco and sent their valuable data to VS-NET. We also thank the anonymous referee for many critical comments that improved the manuscript. This research has been supported in part by Grants-in-Aid for Scientific Research (08640321, 09640325, 11640226, 20283591) of the Japanese Ministry of Education, Science, Culture, and Sports. K. M. has been supported financially as a Research Fellow for Young Scientists by the Japan Society for the Promotion of Science.
# Star Clusters in Local Group Galaxies – Impact of Environment on Their Evolution and Survival

## 1. Introduction

Star clusters in Local Group galaxies comprise a wide range of roughly coeval stellar agglomerates ranging from globular clusters to associations, from very metal-poor objects to clusters with solar and higher metallicity, and from ancient populations to embedded young clusters. Three basic types of such clusterings are observed: globular clusters, open clusters, and associations.

### 1.1. Globular Clusters

Globular clusters are centrally concentrated, spherical systems with masses of $`4.2<\mathrm{log}M[M_{\odot }]<6.6`$ and tidal radii ranging from $`\sim 10`$ to $`\sim 100`$ pc. They are bound, long-lived ($`>`$10 Gyr) objects whose lifetimes may extend to a Hubble time or beyond, though owing to various efficient destruction mechanisms the presently observed globulars are likely just a lower limit on the initial number of globular clusters. Globular clusters have been observed in all known types of galaxies, but many of the less massive galaxies do not contain globular clusters. Thirteen of the 36 Local Group galaxies have globular clusters. One of the most massive, most luminous ($`M_V=-10.^\mathrm{m}55`$) globular clusters is the old, metal-rich, elliptical globular Mayall ii (or G1) in M31 (Rich et al. 1996). In contrast, the faintest ($`M_V\sim 0.^\mathrm{m}2`$), least massive globular currently known is AM-4, a distant old Galactic halo globular cluster (Inman & Carney 1987).

### 1.2. Open Clusters

Open clusters have masses of $`3<\mathrm{log}M[M_{\odot }]<5`$ and radii of 1 to 20 pc. Most Milky Way open clusters survive for only $`\sim `$200 Myr (Janes, Tilley, & Lyngå 1988), although there are long-lived open clusters with ages of several Gyr (Phelps, Janes, & Montgomery 1994). Loose, extended open clusters are dominant in spiral galaxies like the Milky Way, while the Magellanic Clouds contain a large number of blue, compact, populous clusters. Open clusters have been identified in $`\sim 42`$% of the Local Group galaxies. None were found in dwarf spheroidals (dSphs), which are dominated by populations older than a few Gyr. The oldest open clusters (e.g., NGC 6791: $`\sim 8`$ Gyr; Chaboyer, Green, & Liebert 1999) and the youngest globulars (e.g., Ter 7: $`\sim 8`$ Gyr; Buonanno et al. 1998a) in the Local Group overlap in age. The distinction between massive open clusters and low-mass globular clusters is somewhat arbitrary and largely based on age. Bound objects that survive for more than 10 Gyr are generally called globulars even though they may resemble sparse open clusters.

### 1.3. Super Star Clusters

Super star clusters are young populous clusters with masses exceeding several $`10^4M_{\odot }`$ within a radius of 1–2 pc. They may be progenitors of globular clusters. The most massive super star clusters are found in interacting and starburst galaxies (e.g., Schweizer & Seitzer 1998, Gallagher & Smith 1999). Super star clusters are often located in giant H ii regions or occur as nuclear star clusters near the centers of massive galaxies. Not every giant H ii region harbors a super star cluster, though, and only a few, low-mass super star clusters are known in the Local Group. Giant H ii regions such as 30 Doradus in the Large Magellanic Cloud (LMC) and NGC 3603 in the Milky Way both contain starburst clusters, which interestingly show evidence for the formation of low-mass stars (e.g., Brandl et al. 1999) despite the presence of many very massive stars such as O3 and main-sequence Wolf-Rayet stars. If the R 136 cluster in 30 Dor can remain bound over a Hubble time, it will evolve into a low-mass globular cluster. Some of the populous LMC clusters might survive for a Hubble time as well (Goodwin 1997a). The Quintuplet and Arches clusters near the Galactic center (Figer et al. 1999) are examples of nuclear clusters in the Local Group. They may have formed through colliding giant molecular clouds and are expected to have a limited lifetime due to the strong tidal forces of their environment.

### 1.4. Associations

Star formation in giant molecular clouds usually leads to the formation of associations and/or open clusters, which appear to be the major contributors to a galaxy's field population. Associations are extended, unbound, coeval groups of stars with radii of $`<100`$ pc. They disperse on time scales of $`\sim `$100 Myr. Associations mark spiral arms in spiral galaxies. They may be located within or at the edges of shells and supershells in spirals and irregulars. Often they are embedded in hierarchical structures of similar age such as stellar aggregates ($`\sim `$250 pc radius) and star complexes ($`\sim `$600 pc radius; Efremov, Ivanov, & Nikolov 1987). It seems reasonable to assume that associations were present in all types of galaxies during and after episodes of star formation. In the Local Group, associations can be found in all galaxies with current star formation. Both open clusters and associations generally have initial mass functions (IMFs) consistent with a Salpeter slope for high- and intermediate-mass stars (Massey, Johnson, & DeGioia-Eastwood 1995a; Massey et al. 1995b).

## 2. Star Clusters in the Local Group

The global properties of the star clusters in Local Group galaxies are summarized in Table 1. Within a zero-velocity surface of 1.2 Mpc (Courteau & van den Bergh 1999) around the Local Group barycenter, 36 galaxies are currently known. In fewer than half of these, globular or open clusters have been detected. While past surveys established that the majority of the least massive galaxies, the dSphs, do not contain star clusters, the census for more massive galaxies is still incomplete.

## 3. Environmental Effects and Interactions Between Galaxies

Generally, the number of globular clusters is larger in the more massive galaxies, while the few globular clusters in faint dSphs lead to high specific frequencies $`S_N`$ (i.e., the number of globulars, $`N_{GC}`$, normalized by parent galaxy luminosity; $`S_N=N_{GC}10^{0.4(M_V+15)}`$, Harris & van den Bergh 1981).

### 3.1. Tidal Stripping

The orbital decay times of globular clusters in dSph galaxies are of the order of only a few Gyr (Hernandez & Gilmore 1998). While in Sagittarius and Fornax one of the globulars lies near the projected galaxy center, both dSphs show spatially extended globular cluster systems. This, as well as the puzzling present-day lack of gas, suggests that they underwent significant mass loss (Oh, Lin, & Richer 2000). The detection of extratidal stars suggests that tidal stripping may have reduced some dSphs to as little as 1% of their original mass (Majewski et al. 2000). This might explain the inflated $`S_N`$ measured today. The dSph And I is the least massive galaxy in which a globular cluster has been detected to date (Grebel, Dolphin, & Guhathakurta 2000a). Its faint, sparse globular resembles Galactic outer halo globulars and is located just beyond the galaxy's core radius.
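As a quick numerical illustration of the specific frequency defined in §3, the sketch below shows how a single globular cluster inflates $`S_N`$ in a faint host; the galaxy values are invented round numbers, not measurements:

```python
# S_N = N_GC * 10^(0.4 * (M_V + 15)) (Harris & van den Bergh 1981)
def specific_frequency(n_gc, m_v):
    """Globular cluster specific frequency of a host galaxy."""
    return n_gc * 10.0 ** (0.4 * (m_v + 15.0))

# A luminous spiral with many globulars vs. a faint dSph with one:
print(specific_frequency(n_gc=160, m_v=-21.0))  # ~0.6 for the bright galaxy
print(specific_frequency(n_gc=1, m_v=-11.0))    # ~40 for the faint dSph
```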
And I is another good candidate for tidal stripping due to its close proximity to M31 (45–85 kpc, Da Costa et al. 1996). Conversely, some globular clusters may have been stripped from dwarf galaxies by more massive galaxies, or may be the cores of accreted nucleated dwarf galaxies (e.g., Bassino, Muzzio, & Rabolli 1994). Its abundance spread and possible age range suggest that the Milky Way's most massive globular, $`\omega `$ Cen, may be the result of such an evolution (e.g., Hughes & Wallerstein 2000).

### 3.2. Cluster Formation Triggered By Galaxy Interactions?

The Magellanic Clouds interact with each other and with the Milky Way, and stand out through their excess of young populous clusters. No such large numbers are observed in the other, less massive Local Group irregulars, none of which are close to a massive spiral. The field star formation history of the Magellanic Clouds is fairly continuous (Holtzman et al. 1999), but the LMC shows a pronounced peak in cluster formation 1–2 Gyr ago, which coincides with the second-to-last close encounter with the Milky Way (Girardi et al. 1995). Curiously, the Small Magellanic Cloud (SMC) does not show a corresponding peak at this age. However, the cluster age distributions of the LMC and SMC both peak at 100–200 Myr (Grebel et al. 2000b), which coincides with the last perigalacticon and the last close encounter between the Magellanic Clouds (Gardiner, Sawa, & Fujimoto 1994). This suggests that the recent increases in cluster formation may have been at least partially interaction-triggered.

## 4. Cluster Destruction Through Intragalactic Environment

Gnedin & Ostriker's (1997) "vital diagrams" summarize the dominant effects of globular cluster destruction (see also Gerhard, these proceedings) in the Milky Way: relaxation, tidal shocks through disk and bulge passages, and dynamical friction as a function of globular cluster mass and half-mass radius. At small Galactocentric distances only fairly massive, compact globular clusters can survive for more than a Hubble time. This correlates well with the observed increase in globular cluster half-light radii with Galactocentric distance (van den Bergh 1994a), while the lack of compact clusters at large distances remains puzzling. Goodwin (1997b) suggests that for a given Galactocentric radius, IMF, and star formation efficiency (SFE), the survival of a cluster depends on its central density at the time of formation (see McLaughlin's contribution for more on SFEs). The short (0.2 Gyr) lifetime of Milky Way open clusters is believed to be partially due to interactions with giant molecular clouds, which destroy open clusters within the solar radius (Wielen 1991). In the Magellanic Clouds, a less violent dynamical environment, open cluster life expectancies are much longer (median age 0.9 Gyr in the SMC and 1.1 Gyr in the LMC; Hodge 1988).

## 5. Cluster Properties as a Function of Environment

### 5.1. The Oldest Star Clusters in the Local Group

While absolute and relative age determinations for globular clusters are plagued by a number of uncertainties (Stetson, VandenBerg, & Bolte 1996; Sarajedini, Chaboyer, & Demarque 1997), there is growing consensus that Galactic globular clusters show age differences of up to $`\sim `$5 Gyr. Age values listed in Table 1 refer to an arbitrarily chosen oldest age of 15 Gyr. The oldest globular clusters in the Milky Way, the LMC (Olsen et al. 1998; Johnson et al. 2000), Sagittarius (Montegriffo et al. 1998), and Fornax (Buonanno et al. 1998b) formed at the same time, while cluster formation began $`\sim `$3 Gyr later in the SMC (Mighell, Sarajedini, & French 1998) and M33 (Sarajedini et al. 1998). These differences are not obviously correlated with galaxy mass, type, or location.

### 5.2. Radial Abundance Gradients

The globular cluster systems of Local Group spirals show an overall radial abundance gradient in the sense that the central (bulge) regions contain a range of abundances including very metal-rich clusters (Barbuy, Bica, & Ortolani 1998), while the mean metallicities decrease with increasing galactocentric distance. The LMC appears to show similar behavior (Da Costa 2000).

### 5.3. Cluster Shapes and Sizes

Due to the weaker tidal field of the LMC, its clusters can retain higher ellipticities longer (Goodwin 1997c). At a given galactocentric distance the LMC globular clusters tend to be larger than Galactic globulars, and in both the Milky Way and the LMC half-light radii ($`r_h`$) increase with galactocentric distance (van den Bergh 2000 and Sect. 4). Galactic globulars on nearly circular orbits are systematically larger (van den Bergh 1994a), while clusters on retrograde orbits are smaller than other globulars, indicative of preferential tidal stripping (van den Bergh 1994b). One could speculate that the small $`r_h`$ values of Fornax's globulars indicate that this dSph was initially much more massive. Van den Bergh (1994a) interprets the small $`r_h`$ values as evidence against accretion of globular clusters from disintegrating dSphs by the Galactic halo. In contrast, the large $`r_h`$ of And I's globular cluster ($`\sim `$10 pc) is comparable to that of Galactic outer halo globular clusters (Grebel et al. 2000a).

### 5.4. Stellar Content

The horizontal branch (HB) morphologies of inner LMC globular clusters resemble those of inner Galactic halo globulars, while outer LMC globulars show the same second-parameter effects as their Galactic counterparts (Johnson et al. 2000). Two of the old Fornax globular clusters show clear second-parameter HB variations (Smith et al. 1996, Buonanno et al. 1998b). This shows not only that second-parameter globulars in dwarfs can be as old as the oldest Galactic halo globulars, but also that accretion of globular clusters from disintegrating dwarfs (a mechanism proposed to explain the metal-poor, red HB clusters in the outer Galactic halo) would contribute both types of globulars to the Milky Way halo. For a discussion of the metallicity dependence of the blue-to-red supergiant ratio in young Galactic and Magellanic clusters we refer to Langer & Maeder (1995). A possible metallicity dependence of Be star fractions and rotation in young clusters in these galaxies is discussed in Maeder, Grebel, & Mermilliod (1999).

## 6. Summary

In about 40% of the Local Group galaxies star clusters have been detected so far, but the census is still incomplete. Several (but not all) Local Group galaxies seem to share a common epoch of earliest globular cluster formation. The most massive galaxies show a tendency for rapid enrichment in their oldest cluster population near their centers as compared to clusters at larger galactocentric radii. The galactocentric dependence of cluster sizes and HB morphology may be intrinsic to the globular cluster formation process rather than due to the accretion of dwarf galaxies. The observed properties of the globular clusters in the least massive dwarfs suggest that their parent galaxies may originally have been substantially more massive.
Cluster destruction mechanisms and time scales are a function of galaxy environment and galaxy mass. The most recent enhancement of star cluster formation in the Magellanic Clouds may have been triggered by their close encounter with each other and the Milky Way.

#### Acknowledgments.

It is a pleasure to acknowledge support by NASA through grant HF-01108.01-98A from STScI, which is operated by AURA, Inc., under NASA contract NAS5-26555.

## References

Barbuy, B., Bica, E., & Ortolani, S. 1998, A&A, 333, 117
Bassino, L.P., Muzzio, J.C., & Rabolli, M. 1994, ApJ, 431, 634
Brandl, B., Brandner, W., Eisenhauer, F., Moffat, A.F.J., Palla, F., & Zinnecker, H. 1999, A&A, 352, 69
Buonanno, R., Corsi, C.E., Pulone, L., Fusi Pecci, F., & Bellazzini, M. 1998a, A&A, 333, 505
Buonanno, R., Corsi, C.E., Zinn, R., Fusi Pecci, F., Hardy, E., & Suntzeff, N.B. 1998b, ApJ, 501, L33
Chaboyer, B., Green, E.M., & Liebert, J. 1999, AJ, 117, 1360
Courteau, S., & van den Bergh, S. 1999, AJ, 118, 337
Da Costa, G.S. 2000, in IAU Symp. 190, New Views of the Magellanic Clouds, eds. Y.-H. Chu et al. (San Francisco: ASP), 397
Da Costa, G.S., Armandroff, T.E., Caldwell, N., & Seitzer, P. 1996, AJ, 112, 2576
Efremov, Yu.N., Ivanov, G.R., & Nikolov, N.S. 1987, Ap&SS, 135, 119
Figer, D.F., Kim, S.S., Morris, M., Serabyn, E., Rich, R.M., & McLean, I.S. 1999, ApJ, 525, 750
Gallagher, J.S., & Smith, L.J. 1999, MNRAS, 304, 540
Gardiner, L.T., Sawa, T., & Fujimoto, M. 1994, MNRAS, 266, 567
Girardi, L., Chiosi, C., Bertelli, G., & Bressan, A. 1995, A&A, 298, 87
Gnedin, O.Y., & Ostriker, J.P. 1997, ApJ, 474, 223
Goodwin, S.P. 1997a, MNRAS, 284, 785
Goodwin, S.P. 1997b, MNRAS, 286, L39
Goodwin, S.P. 1997c, MNRAS, 286, 669
Grebel, E.K., Dolphin, A.E., & Guhathakurta, P. 2000a, in prep.
Grebel, E.K., Zaritsky, D., Harris, J., & Thompson, I. 2000b, in IAU Symp. 190, New Views of the Magellanic Clouds, eds. Y.-H. Chu et al. (San Francisco: ASP), 405
Harris, W.E., & van den Bergh, S. 1981, AJ, 86, 1627
Hernandez, X., & Gilmore, G. 1998, MNRAS, 297, 517
Hodge, P.W. 1988, PASP, 100, 576
Holtzman, J.A., et al. 1999, AJ, 118, 2262
Hughes, J., & Wallerstein, G. 2000, AJ, in press (astro-ph/9912291)
Janes, K.A., Tilley, C., & Lyngå, G. 1988, AJ, 95, 1122
Johnson, J.A., Bolte, M., Stetson, P.B., Hesser, J.E., & Somerville, R.S. 2000, ApJ, in press (astro-ph/9911187)
Langer, N., & Maeder, A. 1995, A&A, 295, 685
Maeder, A., Grebel, E.K., & Mermilliod, J.-C. 1999, A&A, 346, 459
Majewski, S.R., Ostheimer, J.C., Patterson, R.J., Kunkel, W.E., Johnston, K.V., & Geisler, D. 2000 (astro-ph/9911191)
Massey, P., Johnson, K.E., & DeGioia-Eastwood, K. 1995a, ApJ, 454, 151
Massey, P., Lang, C.C., DeGioia-Eastwood, K., & Garmany, C.D. 1995b, ApJ, 438, 188
Mighell, K.J., Sarajedini, A., & French, R.S. 1998, AJ, 116, 2395
Montegriffo, P., Bellazzini, M., Ferraro, F.R., Martins, D., Sarajedini, A., & Fusi Pecci, F. 1998, MNRAS, 294, 315
Oh, K.S., Lin, D.N.C., & Richer, H.B. 2000, in prep.
Olsen, K.A.G., Hodge, P.W., Mateo, M., Olszewski, E.W., Schommer, R.A., Suntzeff, N.B., & Walker, A.R. 1998, MNRAS, 300, 665
Phelps, R.L., Janes, K.A., & Montgomery, K.A. 1994, AJ, 107, 1079
Sarajedini, A., Chaboyer, B., & Demarque, P. 1997, PASP, 109, 1321
Sarajedini, A., Geisler, D., Harding, P., & Schommer, R. 1998, ApJ, 508, L37
Schweizer, F., & Seitzer, P. 1998, AJ, 116, 2206
Smith, E.O., Neill, J.D., Mighell, K.J., & Rich, R.M. 1996, AJ, 111, 1596
Stetson, P.B., VandenBerg, D.A., & Bolte, M. 1996, PASP, 108, 560
van den Bergh, S. 1994a, AJ, 108, 2145
van den Bergh, S. 1994b, ApJ, 432, L105
van den Bergh, S. 2000, ApJ, in press (astro-ph/9910243)
Wielen, R. 1991, in ASP Conf. Ser. 13, The Formation and Evolution of Star Clusters, ed. K. Janes (San Francisco: ASP), 343
# Formation of heavy quasiparticle state in two-band Hubbard model

## I Introduction

The recent discovery of heavy fermion behavior in LiV<sub>2</sub>O<sub>4</sub> opens up possibilities to explore Kondo physics in lattice $`d`$-electron systems, which has so far been restricted to $`f`$-electron systems containing lanthanide or actinide atoms. The heavy fermion behavior has been widely observed in various measurements, such as specific heat, susceptibility, <sup>7</sup>Li and <sup>51</sup>V NMR, $`\mu `$SR, thermal expansion, quasielastic neutron scattering, resistivity, and so on. The low-energy physics is characterized by the large mass enhancement in the specific heat coefficient, $`\gamma \sim 0.21`$ J/V-mol K<sup>2</sup>, with the Sommerfeld-Wilson ratio $`R_\mathrm{W}\sim 1.71`$. The characteristic temperature of Kondo or spin fluctuations is estimated as $`T^{*}\sim 30`$ K. With elevated temperature, the magnetic susceptibility approximately follows the Curie-Weiss law, $`C/(T-\theta )`$, where the Curie constant $`C`$ is consistent with a $`V^{+4}`$ spin $`S=1/2`$ with $`g`$ factor $`2.23`$, and the negative Weiss temperature ($`\theta =-63`$ K) is familiar from $`f`$-electron heavy fermions.

Several band structure calculations have been made to date. They have revealed that the octahedral coordination of the oxygen ions around the V atom causes a large splitting of the $`d`$ states into $`t_{2g}`$ and $`e_g`$ orbitals. The partially filled $`t_{2g}`$ bands can be described roughly by V-V hopping, and they are well separated from the filled O-$`2p`$ bands and the empty $`e_g`$ bands. Eyert et al. suggest that the specific heat enhancement comes from spin fluctuations, with the magnetic order suppressed by the geometric frustration. In a similar context, spin fluctuations near a magnetically unstable point in Li<sub>1-x</sub>Zn<sub>x</sub>V<sub>2</sub>O<sub>4</sub> are discussed by Fujiwara et al. On the other hand, an attempt to map onto the conventional periodic Anderson model (PAM) is made by Anisimov et al. In a realistic treatment of the trigonal crystal field, the triply degenerate $`t_{2g}`$ orbitals split into the non-degenerate $`A_{1g}`$ and doubly degenerate $`E_g`$ representations of the $`D_{3d}`$ group. Their assertion is that, due to the Coulomb interaction among $`d`$ electrons, one electron of the $`d^{1.5}`$ configuration is localized in the $`A_{1g}`$ orbital and the rest partially fill a relatively broad conduction band made from $`E_g`$ orbitals. This idea seems to resolve the enormous differences from the isostructural LiTi<sub>2</sub>O<sub>4</sub>, which has 0.5 $`d`$ electrons per Ti atom: it shows relatively $`T`$-independent Pauli paramagnetism in the susceptibility and a superconducting state below $`T_c=13.7`$ K, which is well described by the BCS theory. A related discussion is also made by Varma. Nevertheless, it is not easy for a $`d`$ electron to be localized, because of the much larger spatial extent of $`d`$ orbitals compared with $`f`$ orbitals, unless the $`A_{1g}`$ orbital is located much deeper than the $`E_g`$ band. In other words, the use of the PAM has no solid ground, contrary to naive expectation. Moreover, the intra-band Coulomb repulsion does not work effectively to produce a heavy mass, since each electron prefers to occupy different orbitals rather than the same one.
In this paper, we clarify how a heavily renormalized quasiparticle ground state is realized in the two-band Hubbard model, leading to a somewhat different physical picture from that argued in Ref. . Using the slave-boson mean-field approximation, we discuss the importance of the inter-band Coulomb repulsion in making one of the $`d`$ electrons effectively localized, providing a situation similar to that given by the PAM. Since the geometric frustration is expected to prevent a long-range order, we restrict our attention to a paramagnetic ground state. In the same spirit, we discuss LiTi<sub>2</sub>O<sub>4</sub> as the case of smaller electron density, $`n_e=0.5`$.

## II Model and Formulation

Let us start with the two-band Hubbard model, which may capture the broad $`E_g`$ band (called $`A`$) with a width of 2 eV and the narrow $`A_{1g}`$ band (called $`B`$) with a width of 1 eV; the latter is located 0.1 eV lower than the former due to the trigonal distortion. The $`d`$-$`d`$ Coulomb interaction and the Hund's-rule coupling are estimated as 3 eV and 1 eV, respectively. The mixing strength between the bands must be much smaller than the width of both bands. The Hamiltonian is given by

$$H=\sum_{k\ell \sigma }\left[\left(ϵ_{k\ell }+E_{\ell }-\mu \right)c_{k\ell \sigma }^{\dagger }c_{k\ell \sigma }+vc_{k\ell \sigma }^{\dagger }c_{k\overline{\ell }\sigma }\right]+H_{\mathrm{int}},$$ (1)

$$H_{\mathrm{int}}=\sum_{i\ell }U_{\ell }n_{i\ell \uparrow }n_{i\ell \downarrow }+U\sum_{i\alpha \beta }c_{iA\alpha }^{\dagger }c_{iA\alpha }c_{iB\beta }^{\dagger }c_{iB\beta }$$ (2)

$$-\frac{J}{4}\sum_{i\alpha \beta \gamma \delta }c_{iA\alpha }^{\dagger }c_{iA\beta }c_{iB\gamma }^{\dagger }c_{iB\delta }\,\vec{\sigma }_{\alpha \beta }\cdot \vec{\sigma }_{\gamma \delta }.$$ (3)

The first term denotes the kinetic energy of the conduction electrons for band $`\ell `$, in which $`E_{\ell }=\pm \mathrm{\Delta }/2`$ for the $`A`$ and $`B`$ bands, $`\mathrm{\Delta }`$ being the trigonal splitting, and $`\mu `$ the chemical potential. The second term is the mixing between the bands, whose k-dependence is neglected for simplicity. The intra- and inter-band Coulomb interactions $`U_{\ell }`$ and $`U`$, as well as the Hund's-rule coupling $`J`$ at site $`i`$, are contained in $`H_{\mathrm{int}}`$, where the intra-band Coulomb interactions are set to $`U_A=U_B=\mathrm{\infty }`$ for simplicity.

To solve this Hamiltonian, we introduce slave-boson fields for the $`d^0`$–$`d^2`$ states at each site. We associate a boson $`e_i`$ with the $`d^0`$ state, $`p_{i\ell \sigma }`$ with the $`d^1`$ states, and $`d_{iSS_z}`$ with the $`d^2`$ states labeled by their spin states $`(S,S_z)`$. For the uniform solution of the mean-field approximation, we replace these bosons by site-independent $`c`$ numbers. Assuming a paramagnetic ground state, five bosons, $`e=e_i`$, $`p_A=p_{iA\sigma }`$, $`p_B=p_{iB\sigma }`$, $`d_0=d_{00}`$, and $`d_1=d_{1S_z}`$, are involved in the calculation. The completeness relation for the $`d^0`$–$`d^2`$ states is given by

$$I-1=0,\qquad I\equiv e^2+2\sum_{\ell }p_{\ell }^2+d_0^2+3d_1^2.$$ (4)

Since the probabilities for the singlet and the triplet states are given by $`d_0^2`$ and $`3d_1^2`$, respectively, the two-body interactions are rewritten in terms of the bosons as

$$H_{\mathrm{int}}^{\mathrm{MF}}/N=U\left(d_0^2+3d_1^2\right)-\frac{3}{4}J\left(d_1^2-d_0^2\right),$$ (5)

where $`N`$ is the number of sites.
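As a small sanity check of this bookkeeping, one can pick trial boson amplitudes, confirm the completeness relation (4), and evaluate eq. (5); the amplitudes below are invented for illustration only:

```python
# Toy check of the mean-field bookkeeping, eqs. (4)-(5); trial amplitudes only.
e, p_A, p_B, d0, d1 = 0.10, 0.30, 0.50, 0.20, 0.30    # invented sample values

I = e**2 + 2 * (p_A**2 + p_B**2) + d0**2 + 3 * d1**2  # completeness, eq. (4)
U, J = 3.0, 1.0                                       # eV, as quoted in the text
h_int = U * (d0**2 + 3 * d1**2) - 0.75 * J * (d1**2 - d0**2)  # eq. (5), per site

print(I, h_int)   # I equals 1 for a physical set of amplitudes
```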
Each species of electron must satisfy the constraint at each site,

$$c_{i\ell \sigma }^{\dagger }c_{i\ell \sigma }-Q_{\ell }=0,\qquad Q_{\ell }\equiv p_{\ell }^2+\frac{1}{2}\left(d_0^2+3d_1^2\right).$$ (6)

In this slave-boson scheme, the hopping term is suppressed as

$$c_{k\ell \sigma }^{\dagger }c_{k\ell \sigma }\to q_{\ell }c_{k\ell \sigma }^{\dagger }c_{k\ell \sigma },\qquad c_{kA\sigma }^{\dagger }c_{kB\sigma }\to qc_{kA\sigma }^{\dagger }c_{kB\sigma },$$ (7)

where the renormalization factors $`q_{\ell }`$ and $`q`$ are given by

$$q_{\ell }=\tilde{z}_{\ell }^2,\qquad q=\sqrt{q_Aq_B},\qquad \tilde{z}_{\ell }=Q_{\ell }^{-1/2}z_{\ell }\left(1-Q_{\ell }\right)^{-1/2},$$ (8)

$$z_{\ell }=ep_{\ell }+\frac{1}{2}\left(d_0+3d_1\right)p_{\overline{\ell }}.$$ (9)

Note that the Gutzwiller correction $`Q_{\ell }^{-1/2}(1-Q_{\ell })^{-1/2}`$ is necessary to reproduce the non-interacting limit. Finally, we obtain the mean-field free energy per site with two Lagrange multipliers $`\lambda `$ and $`\lambda _{\ell }`$,

$$F^{\mathrm{MF}}/N=-\frac{2}{\beta N}\sum_{k}\sum_{m}^{\pm }\mathrm{ln}\left(1+e^{-\beta \left(\tilde{E}_{km}-\mu \right)}\right)+H_{\mathrm{int}}^{\mathrm{MF}}/N+\lambda \left(I-1\right)-2\sum_{\ell }\lambda _{\ell }Q_{\ell },$$ (11)

where the bonding and antibonding ($`m=\mp `$) quasiparticle bands are given by

$$\tilde{E}_{km}=\frac{1}{2}\left[\tilde{\xi }_{kA}+\tilde{\xi }_{kB}+m\sqrt{(\tilde{\xi }_{kA}-\tilde{\xi }_{kB})^2+4\tilde{v}^2}\right],$$ (12)

$$\tilde{\xi }_{k\ell }\equiv q_{\ell }ϵ_{k\ell }+E_{\ell }+\lambda _{\ell },\qquad \tilde{v}\equiv qv.$$ (13)

The width of band $`\ell `$ and the mixing strength are renormalized by the factors $`q_{\ell }`$ and $`q`$, respectively. The positions of the bands are moved up by an amount $`\lambda _{\ell }`$. Minimizing the free energy with respect to the five bosons and the two Lagrange multipliers, the set of self-consistent equations is obtained as follows:

$$e^2+2\sum_{\ell }p_{\ell }^2+d_0^2+3d_1^2-1=0,$$ (14)

$$\overline{n}_{\ell }-2[p_{\ell }^2+\frac{1}{2}(d_0^2+3d_1^2)]=0,$$ (15)

$$\sum_{\ell }\frac{\overline{ϵ}_{\ell }}{2}\frac{\partial q_{\ell }}{\partial e}+v\overline{r}\frac{\partial q}{\partial e}+\lambda e=0,$$ (16)

$$\sum_{\ell }\frac{\overline{ϵ}_{\ell }}{2}\frac{\partial q_{\ell }}{\partial p_{\ell '}}+v\overline{r}\frac{\partial q}{\partial p_{\ell '}}-2(\lambda _{\ell '}-\lambda )p_{\ell '}=0,$$ (17)

$$\sum_{\ell }\frac{\overline{ϵ}_{\ell }}{2}\frac{\partial q_{\ell }}{\partial d_0}+v\overline{r}\frac{\partial q}{\partial d_0}+(T_0-\sum_{\ell }\lambda _{\ell }+\lambda )d_0=0,$$ (18)

$$\sum_{\ell }\frac{\overline{ϵ}_{\ell }}{2}\frac{\partial q_{\ell }}{\partial d_1}+v\overline{r}\frac{\partial q}{\partial d_1}+3(T_1-\sum_{\ell }\lambda _{\ell }+\lambda )d_1=0,$$ (19)

$$n_e-\sum_{\ell }\overline{n}_{\ell }=0,$$ (20)

where the energies of the singlet and the triplet states are defined by $`T_0=U+3J/4`$ and $`T_1=U-J/4`$.
The last equation determines the chemical potential for a given electron density $`n_e`$. Hereafter we restrict ourselves to zero temperature, $`\beta ^{-1}=0`$. At zero temperature the averages of the electron densities, mixing amplitude, and kinetic energies are given by

$$N\overline{n}_{\ell }\equiv \sum_{k\sigma }\langle c_{k\ell \sigma }^{\dagger }c_{k\ell \sigma }\rangle =\sum_{km\sigma }\rho _{k\ell }^m\theta (\mu -\tilde{E}_{km}),$$ (21)

$$N\overline{r}\equiv \sum_{k\sigma }\langle c_{kA\sigma }^{\dagger }c_{kB\sigma }\rangle =\sum_{km\sigma }\zeta _k^m\theta (\mu -\tilde{E}_{km}),$$ (22)

$$N\overline{ϵ}_{\ell }\equiv \sum_{k\sigma }ϵ_{k\ell }\langle c_{k\ell \sigma }^{\dagger }c_{k\ell \sigma }\rangle =\sum_{km\sigma }ϵ_{k\ell }\rho _{k\ell }^m\theta (\mu -\tilde{E}_{km}),$$ (23)

with

$$\rho _{k\ell }^m=\frac{1}{2}\left[1+m\frac{\tilde{\xi }_{k\ell }-\tilde{\xi }_{k\overline{\ell }}}{\sqrt{(\tilde{\xi }_{kA}-\tilde{\xi }_{kB})^2+4\tilde{v}^2}}\right],$$ (24)

$$\zeta _k^m=\frac{m\tilde{v}}{\sqrt{(\tilde{\xi }_{kA}-\tilde{\xi }_{kB})^2+4\tilde{v}^2}}.$$ (25)

For simplicity, we use a rectangular density of states (DOS) with a linear dispersion relation, i.e., $`ϵ_{k\ell }=W_{\ell }x/2`$ for $`|x|\le 1`$. The $`k`$-summation in the averages can then be carried out analytically with the integration $`\frac{1}{N}\sum_k\to \frac{1}{2}\int_{-1}^{1}dx`$.

## III Results and Discussions

The set of self-consistent equations is solved numerically. In the following we use the parameters $`W_A=2`$, $`W_B=1`$, and $`v=0.2`$ (eV). Figure 1 shows the quasiparticle DOS with and without the interactions $`U_{\ell }`$, $`U`$, and $`J`$, for $`\mathrm{\Delta }=0.2`$ (eV) and $`n_e=1.5`$, corresponding to LiV<sub>2</sub>O<sub>4</sub>. The bandwidth is renormalized only slightly by the intra-band Coulomb repulsion $`U_A=U_B=\mathrm{\infty }`$: the renormalization amplitude of the narrower $`B`$ band, $`q_B`$, remains of the order of $`10^{-1}`$ without the inter-band repulsion, since each electron prefers to occupy different orbitals rather than the same one. On the contrary, with the inter-band interactions $`U=3`$ and $`J=1`$ (eV), a strong renormalization takes place, and it gives rise to a sharp peak at the Fermi level in the quasiparticle DOS (its width is about 40 K). Note that neither the upper nor the lower Hubbard band can be described within the mean-field approximation. The inset in Fig. 1 shows the quasiparticle DOS for the case of $`n_e=0.5`$, corresponding to LiTi<sub>2</sub>O<sub>4</sub>. As expected, the renormalization is very weak, $`q_B\sim 0.7`$.

To elucidate why the inter-band interaction assists the strong renormalization, we discuss the limiting cases for $`n_e=1.5`$. In the absence of $`H_{\mathrm{int}}`$ in eq. (3), a rather large trigonal splitting, i.e., $`\mathrm{\Delta }\sim W_A/4`$, is required to stabilize an integral valence in the $`B`$ band, $`\overline{n}_B`$. In the case of strong repulsion, $`U_A=U_B=\mathrm{\infty }`$ and $`U/W_{\ell }\gg 1`$, on the other hand, the inter-band Coulomb repulsion considerably enhances the difference of the electron densities between the two bands, $`\mathrm{\Delta }n=\overline{n}_B-\overline{n}_A`$, because of the relation $`U\overline{n}_A\overline{n}_B=U[n_e^2-(\mathrm{\Delta }n)^2]/4`$.
In this case, with $`d_0\gg d_1`$ for $`J/U\ll 1`$, the renormalization factor vanishes as

$$q_B\simeq \frac{n_e-1}{\overline{n}_B}\frac{1-\overline{n}_B}{1-\overline{n}_B/2}.$$ (26)

Note that in the PAM with $`U=\mathrm{\infty }`$ the hybridization between $`f`$ and conduction electrons is suppressed as $`V^2\to q_fV^2`$, where $`q_f=(1-n_f)/(1-n_f/2)`$. We emphasize here that although the mechanism of strong renormalization appears similar to that in the PAM, it is an entirely new mechanism in which the Kondo limit is provided dynamically by the inter-band Coulomb repulsion. It will be shown below that an integral $`\overline{n}_B`$ can be stabilized even for rather small $`\mathrm{\Delta }`$, based on the detailed balance between the kinetic energy and the inter-band Coulomb interaction. On the other hand, in the case of $`J\gg (U,W_{\ell })`$, the amplitude of the triplet state, $`d_1`$, becomes as large as possible. Its maximum is bounded by $`3d_1^2\le \mathrm{min}(\overline{n}_A,\overline{n}_B)`$. Thus, in order to take the largest value, $`\mathrm{\Delta }n`$ is suppressed. In the limit of large $`J`$, the probabilities for the $`d^0`$–$`d^2`$ states, $`(e^2,p_{\ell }^2,3d_1^2)`$, approach $`(1-n_e/2,0,n_e/2)`$, respectively. Namely, the system undergoes a dimerization with charge order, and the renormalization factor $`q_{\ell }`$ vanishes since $`p_{\ell }\to 0`$ in eq. (9).

Figure 2 shows $`\overline{n}_B`$ as a function of the trigonal splitting $`\mathrm{\Delta }`$. The intra-band Coulomb interactions $`U_A`$ and $`U_B`$ (circles) somewhat enhance $`\overline{n}_B`$. To stabilize the integral valence, however, $`\mathrm{\Delta }`$ is required to be as large as that for the case of free electrons (squares), i.e., $`\mathrm{\Delta }\sim 0.5`$ eV. On the other hand, the inter-band interaction (triangles) works effectively to stabilize the integral valence even for small $`\mathrm{\Delta }`$. Note that $`\overline{n}_B`$ is almost unity for $`\mathrm{\Delta }\gtrsim 0.1`$ eV. The $`\mathrm{\Delta }`$ dependence of the renormalization factors is shown in Fig. 3. As expected from the above discussion, in the presence of the inter-band interactions (squares), the $`B`$ band is strongly renormalized owing to $`\overline{n}_B\sim 1`$, while the intra-band interactions are almost irrelevant up to relatively large $`\mathrm{\Delta }`$ (circles). It is noted that the upper band is almost unrenormalized, while the band mixing is also renormalized considerably, i.e., $`q=\sqrt{q_Aq_B}`$. To elucidate how the inter-band interactions reduce the renormalization factors, we extract the interaction dependence of the renormalization factors for $`\mathrm{\Delta }=0.2`$ in Fig. 4. It is shown that the inter-band Coulomb interaction (squares) effectively reduces the renormalization factors. On the other hand, the Hund's-rule coupling turns out to be almost irrelevant for $`J<U`$ (circles, triangles). This is in contrast to the simple expectation that a strong cancellation between the Hund's-rule coupling and the Kondo exchange coupling considerably reduces the characteristic energy, as discussed in Refs. . Those discussions are based on the impurity model, and hence there is no constraint on the electron density such as $`n_e=\overline{n}_A+\overline{n}_B`$. Since a change of the Hund's-rule coupling requires another change of parameters to restore a given electron density, all parameters must be treated in a self-consistent fashion. This might remove the discrepancy with our result that the Hund's-rule coupling is almost irrelevant in the renormalization.
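A rough numerical illustration of eq. (26) may be useful here: the sketch below evaluates $`q_B`$ for $`n_e=1.5`$ as $`\overline{n}_B`$ approaches unity, alongside the PAM factor $`q_f`$ quoted above. The sample occupations are arbitrary.

```python
# Renormalization factor of eq. (26) for n_e = 1.5 as n_B -> 1,
# compared with the PAM factor q_f = (1 - n_f)/(1 - n_f/2).
def q_b(n_b, n_e=1.5):
    return (n_e - 1.0) / n_b * (1.0 - n_b) / (1.0 - n_b / 2.0)

def q_f(n_f):
    return (1.0 - n_f) / (1.0 - n_f / 2.0)

for n in (0.90, 0.95, 0.99):
    print(f"n_B = {n:4.2f}:  q_B = {q_b(n):.3f},  q_f = {q_f(n):.3f}")
# q_B drops to ~10^-2 as n_B -> 1, i.e., a mass enhancement of order 10^2.
```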
## IV Summary

We have shown that the inter-band Coulomb repulsion plays a significant role in strongly reducing the renormalization factor, while the Hund's-rule coupling is almost irrelevant in the renormalization for $`J<U`$. Even though both bands are rather broad and the splitting between them is very small, the resultant quasiparticle can have a heavy mass, enhanced by about $`10^2`$ times, since the Kondo limit is provided dynamically by the inter-band Coulomb repulsion. Although the values obtained by the slave-boson approach may be changed quantitatively by a more elaborate one, this scenario is highly plausible as an account of the heavy-fermion behavior in LiV<sub>2</sub>O<sub>4</sub> and of the enormous differences from LiTi<sub>2</sub>O<sub>4</sub>. Since the heavily renormalized quasiparticle is stabilized dynamically by the inter-band Coulomb repulsion, it should couple strongly with orbital fluctuations at higher temperature. The large contribution to the specific heat observed above $`T^{*}`$ is presumably related to the orbital fluctuations. At low temperature, $`d`$-electron systems generally exhibit a long-range order. If a paramagnetic state survives due to some reason such as geometric frustration, one would expect heavy-fermion behavior in numerous $`d`$-electron systems. The resultant quasiparticle holds the possibility of showing fascinating phenomena such as a novel superconductivity mediated by orbital fluctuations.

## ACKNOWLEDGMENTS

H. K. would like to thank H. Yamagami for fruitful discussions on the electronic structure calculation. He has also benefited from conversations with D. L. Cox and S. Kondo. This work was supported by a Grant-in-Aid for the Encouragement of Young Scientists and in part by a Grant-in-Aid for COE Research (10CE2004) from the Ministry of Education, Science, Sports and Culture of Japan. K.M. acknowledges C. M. Varma for drawing his attention to this problem.
# Gravitational Collapse and Calogero Model

## Abstract

We study the analytic structure of the S-matrix obtained from the reduced Wheeler-DeWitt wave function describing the spherically symmetric gravitational collapse of massless scalar fields. The simple poles of the S-matrix occur in Euclidean spacetime, and the Euclidean Wheeler-DeWitt equation is a variant of the Calogero models, which is discussed in connection with conformal mechanics and a quantum instanton.

In previous work we studied quantum mechanically the self-similar black hole formation by collapsing scalar fields and found the wave functions that give the correct semi-classical limit. In this brief report we consider the pole structure of the S-matrix obtained from the wave function. The poles correspond to solutions in an unphysical region, namely, in Euclidean spacetime with a quantized parameter ($`c_0`$), which looks like a quantum version of an instanton.

The spherically symmetric geometry minimally coupled to a massless scalar field is described by a reduced action in $`(1+1)`$-dimensional spacetime, whose Hilbert-Einstein form is

$$S=\frac{1}{16\pi }\int_Md^4x\sqrt{-g}\left[R-2\left(\nabla \varphi \right)^2\right]+\frac{1}{8\pi }\int_{\partial M}d^3xK\sqrt{h}.$$ (1)

The reduced action is

$$S_{sph}=\frac{1}{4}\int d^2x\sqrt{-\gamma }r^2\left[\left\{{}^{(2)}R(\gamma )+\frac{2}{r^2}\left(\left(\nabla r\right)^2+1\right)\right\}-2\left(\nabla \varphi \right)^2\right],$$ (2)

where $`\gamma _{ab}`$ is the $`(1+1)`$-dimensional metric. The spherical spacetime metric is

$$ds^2=-2dudv+r^2d\mathrm{\Omega }_2^2,$$ (3)

where $`d\mathrm{\Omega }_2^2`$ is the usual spherical part of the metric, and $`u`$ and $`v`$ are null coordinates. The self-similarity condition is imposed such that

$$r=\sqrt{-uv}\,y(z),\qquad \varphi =\varphi (z),$$ (4)

where $`z=-v/u=e^{2\tau }`$, and $`y`$ and $`\varphi `$ depend only on $`z`$. We introduce the coordinates $`(\omega ,\tau )`$ as

$$u=-\omega e^{-\tau },\qquad v=\omega e^{\tau },$$ (5)

$$ds^2=-2\omega ^2d\tau ^2+2d\omega ^2+\omega ^2y^2d\mathrm{\Omega }_2^2.$$ (6)

The classical solutions of the field equations were obtained by Roberts, and studied in connection with gravitational collapse by others. Classically, black hole formation was allowed only in the supercritical cases ($`c_0>1`$), but even in the subcritical situation there are quantum mechanical tunneling processes that form a black hole, whose probability has been calculated semiclassically. In our previous work we quantized the system canonically within the ADM formulation to obtain the Wheeler-DeWitt equation for quantum black hole formation,

$$\left[-\frac{1}{2K}\frac{\partial ^2}{\partial y^2}+\frac{1}{2Ky^2}\frac{\partial ^2}{\partial \varphi ^2}+K\left(1-\frac{y^2}{2}\right)\right]\mathrm{\Psi }(y,\varphi )=0,$$ (7)

where $`K\equiv \frac{m_p^2}{\mathrm{\hbar }}\frac{\omega _c^2}{2}`$ plays the role of a cut-off parameter of the model, and we use a unit system with $`\mathrm{\hbar }=1`$, $`m_p=1`$, and $`c=1`$. The wave function can be factorized into scalar and gravitational parts,

$$\mathrm{\Psi }(y,\varphi )=\mathrm{exp}\left(\pm iKc_0\varphi \right)\psi (y).$$ (8)

Here the scalar field part is chosen to yield the classical momentum $`\pi _\varphi =\pm Kc_0`$, where $`c_0`$ is the dimensionless parameter determining supercritical ($`c_0>1`$), critical ($`c_0=1`$), and subcritical ($`1>c_0>0`$) collapse.
Now the Wheeler-DeWitt equation becomes an ordinary differential equation,

$$\left[-\frac{1}{2K}\frac{d^2}{dy^2}+\frac{K}{2}\left(2-y^2-\frac{c_0^2}{y^2}\right)\right]\psi (y)=0.$$ (9)

The solution describing black hole formation was obtained in Ref. :

$$\psi _{BH}(y)=\left(\mathrm{exp}{-\frac{i}{2}Ky^2}\right)\left(Ky^2\right)^{\mu _{-}}M(a_{-},b_{-},iKy^2),$$ (10)

whose asymptotic form at spatial infinity is

$$\psi _{BH}(y)\simeq \mathrm{\Gamma }(b_{-})\left[\frac{e^{i\pi a_{-}}}{\mathrm{\Gamma }(a_{+}^{*})}\left(iKy^2\right)^{\mu _{-}-a_{-}}e^{-\frac{i}{2}Ky^2}+\frac{\left(iKy^2\right)^{\mu _{-}-a_{+}^{*}}}{\mathrm{\Gamma }(a_{-})}e^{\frac{i}{2}Ky^2}\right].$$ (11)

Here $`M`$ is the confluent hypergeometric function and

$$a_{\pm }=\frac{1}{2}\pm \frac{i}{2}(Q\mp K),\qquad b_{-}=1-iQ,\qquad \mu _{-}=\frac{1}{4}-\frac{i}{2}Q,$$ (12)

with

$$Q=\left(K^2c_0^2-\frac{1}{4}\right)^{1/2}.$$ (13)

The S-matrix component describing the reflection rate is

$$S=\frac{\mathrm{\Gamma }(a_{+}^{*})}{\mathrm{\Gamma }(a_{-})}\frac{(iK)^{a_{-}-a_{+}^{*}}}{e^{i\pi a_{-}}},$$ (14)

from which we obtain the transmission rate for black hole formation,

$$\frac{j_{trans}}{j_{in}}=1-|S|^2$$ (15)

$$=1-\frac{\mathrm{cosh}\frac{\pi }{2}(Q+K)}{\mathrm{cosh}\frac{\pi }{2}(Q-K)}e^{-\pi Q},$$ (16)

where $`\left|\mathrm{\Gamma }\left(\frac{1}{2}+ix\right)\right|^2=\pi /\mathrm{cosh}(\pi x)`$ is used. Eq. (15) gives the probability of black hole formation for the supercritical, critical, and subcritical $`c_0`$ values.

In this brief report we consider the analytic structure of the S-matrix: it is an analytic function of $`Q`$ and $`K`$ with simple poles, which can be exhibited explicitly as

$$S=\sum_{N=0}^{\mathrm{\infty }}\frac{1}{Q-K+i(2N+1)}\left(\frac{2ie^{-\frac{\pi }{2}K-iK\mathrm{ln}K}}{N!\,\mathrm{\Gamma }(-N-iK)}\right).$$ (17)

The poles reside in the unphysical region of the parameter space of $`Q`$ and $`K`$:

$$Q=\sqrt{K^2c_0^2-\frac{1}{4}}=K-i(2N+1),\qquad N=0,1,2,\dots .$$ (18)

For physical processes of gravitational collapse there cannot be poles, because $`K`$ and $`c_0`$ are real valued. In ordinary quantum mechanics, the poles of the S-matrix occur at bound states, and in relativistic scattering at resonances or Regge poles. In our case we cannot identify the poles with such bound states or resonances. For a physical interpretation of the poles we note that the replacement

$$K=iK_E$$ (19)

is, in effect, the change from the Lorentzian metric to the Euclidean one, and the Wheeler-DeWitt equation in the Euclidean sector becomes

$$\left[-\frac{1}{2K_E}\frac{d^2}{dy^2}+\frac{K_E}{2}\left(y^2+\frac{c_0^2}{y^2}-2\right)\right]\psi _E=0.$$ (20)

Notice that this is a variant of the Calogero models with the Calogero-Moser Hamiltonian, but the energy eigenvalue is fixed, and only a quantized $`c_0`$ is allowed. The solution to the equation is of polynomial type,

$$\psi _E=e^{-\frac{1}{2}K_Ey^2}\left(y\sqrt{2K_E}\right)^{K_E-2N-\frac{1}{2}}\left(\sum_{m=0}^{2N}a_m\left(\sqrt{2K_E}y\right)^m\right),$$ (21)

$$a_{m+2}=\frac{m-2N}{(m+2)(m+2K_E-4N)}a_m.$$ (22)

Here we use the normalizability condition of the wave function $`\psi _E`$, which gives the quantization of $`c_0`$ as

$$K_E^2c_0^2+\frac{1}{4}=\left(K_E-2N-1\right)^2,$$ (23)

or

$$c_0^2=\left(1-\frac{2N+1}{K_E}\right)^2-\frac{1}{4K_E^2}.$$ (24)

The condition (23) is identical to the pole positions of the S-matrix with $`K=iK_E`$ given in (18). The quantum solution (21) is analogous to an instanton in the sense that it is a solution in the Euclidean sector.
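Before turning to the interpretation of this instanton-like solution, the chain of relations (24), (19), (13), and (18) can be checked numerically, and the transmission rate (15) evaluated for real parameters; in the minimal sketch below the value of $`K_E`$ is an arbitrary sample choice:

```python
import cmath
from math import cosh, exp, pi, sqrt

K_E = 10.0                       # arbitrary sample value of the cutoff parameter
for N in range(3):               # normalizable solutions require N < K_E/2
    c0_sq = (1.0 - (2 * N + 1) / K_E) ** 2 - 1.0 / (4.0 * K_E**2)   # eq. (24)
    K = 1j * K_E                                                    # eq. (19)
    Q = cmath.sqrt(K**2 * c0_sq - 0.25)                             # eq. (13)
    print(N, Q, K - 1j * (2 * N + 1))   # the two columns agree: eq. (18)

# Transmission rate (15) for real K, c0: probability of black hole formation.
def p_bh(c0, K=10.0):
    Q = sqrt(K**2 * c0**2 - 0.25)
    return 1.0 - cosh(pi / 2 * (Q + K)) / cosh(pi / 2 * (Q - K)) * exp(-pi * Q)

print(p_bh(1.2), p_bh(0.8))      # supercritical ~1, subcritical exponentially small
```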
It is not, however, an instanton in the strict sense, since an instanton is a classical solution. With instantons one can semiclassically evaluate the probability of a tunneling process, while our solutions (21) provide the quantum probability through the pole contributions to the S-matrix as in (17). The correspondence between the poles and the Euclidean polynomial solutions breaks down for large $`N`$. While the poles contribute for all $`N`$ without limit, the normalizable Euclidean solutions exist only for $`N<\frac{K_E}{2}`$. The polynomial solutions for large $`N`$ are well defined, but they are not normalizable. We do not yet understand these non-normalizable solutions. One notable point about the Euclidean Wheeler-DeWitt equation is that it is a Calogero-type system, which has been extensively discussed in connection with the motion of a particle near the Reissner-Nordström black hole horizon. A surprising connection between black holes and the conformal mechanics of De Alfaro, Fubini and Furlan (DFF) has been established. In their study of conformal quantum mechanics, DFF suggested solving the problem of motion by using the compact operator $`L_0`$ given by $$L_0=\frac{a}{4}\left(\frac{x^2}{a^2}+p^2+\frac{g}{x^2}\right),$$ (25) which is essentially identical to the Wheeler-DeWitt Hamiltonian (20). The presence of conformal mechanics both in the Reissner-Nordström black hole and in the gravitational collapse forming black holes might point to some deep property of gravity in connection with conformal theory. As a final remark we consider the classical field equations corresponding to the poles of the S-matrix. The relevant equations are $$\frac{d\varphi }{d\tau }=\frac{c_0}{y^2},$$ (26) $$\left(\frac{dy}{d\tau }\right)^2=K^2\left(2+y^2+\frac{c_0^2}{y^2}\right),$$ (27) where in the Lorentzian spacetime $`c_0\simeq 1-i(2N+1)/K`$ for large $`K`$. The complex $`c_0`$ implies a complex $`\frac{d\varphi }{d\tau }`$, which may be viewed as analogous to the complex momentum of a bound state in quantum mechanics. The complex $`\frac{dy}{d\tau }`$ is difficult to understand unless one considers a complex spacetime metric. In the Euclidean case ($`K=iK_E`$) these classical equations are the same as the equations with quantized $`c_0`$ in the tunneling region of our previous work. It is beyond the scope of the present work to investigate the role of complex spacetime in gravitational collapse. ###### Acknowledgements. This work was supported in part by the BSRI Program under BSRI 98-015-D00054, 98-015-D00061, 98-015-D00129, and by KOSEF through the CTP of Seoul National University.
# Effect of Nuclear Deformation on 𝐽/𝜓 Suppression in Relativistic Nucleus-Nucleus Collisions ## Abstract Using a hadron-string cascade model, JPCIAE, we study the effect of nuclear deformation on $`J/\psi `$ suppression in the collision of uranium nuclei at 200A GeV/c. We find that the $`J/\psi `$ survival probability is much smaller if the major axes of both deformed nuclei are along the beam direction than if they are perpendicular to the beam direction. PACS numbers: 25.75.Dw, 24.10.Lx, 24.85.+p, 25.75.Gz Because of color screening, $`J/\psi `$ is expected to dissociate in the quark-gluon plasma (QGP) formed in relativistic heavy ion collisions. The resulting suppression of $`J/\psi `$ production in these collisions has been one of the most studied signals for the formation of the quark-gluon plasma. Experiments at CERN have indeed shown that $`J/\psi `$ production is reduced in both proton-nucleus and heavy ion collisions compared to that expected from the superposition of proton-proton collisions at the same energies. For collisions involving light projectiles, such as p-A, O-U, and S-U collisions, conventional mechanisms of $`J/\psi `$ absorption by nucleons and comovers seem to be sufficient to account for the measured suppression. On the other hand, in collisions with heavy projectiles such as the Pb-Pb collision, whether the measured anomalously large $`J/\psi `$ suppression in central collisions can be explained by hadronic absorption alone is still under debate, and explanations based on QGP effects have also been proposed. To study $`J/\psi `$ production and absorption in heavy ion collisions microscopically, transport models have been used. Although there are differences among them, it has been commonly found that explaining the anomalous J/$`\psi `$ suppression in Pb-Pb collisions requires introducing a new mechanism beyond hadronic absorption. It has also been found that including the dissociation of the pre-$`J/\psi `$ $`c\overline{c}`$ state by the color electric field in the initial dense matter can explain the anomalous $`J/\psi `$ suppression in Pb-Pb collisions. Whether a QGP is formed in heavy ion collisions at the CERN SPS still requires further study. One suggestion is to study $`J/\psi `$ production in collisions of deformed nuclei, as the large spatial anisotropy created even in central collisions of these nuclei offers the possibility of probing the mechanisms of $`J/\psi `$ suppression through the final azimuthal distribution. Also, it has been shown that for collisions of deformed nuclei at AGS energies, the maximum energy density reached in the initial stage is higher (by about 38%) and lasts longer if the major axes of both nuclei are along the beam direction (tip-tip) than if they are perpendicular to the beam direction (body-body). In high energy density matter, $`J/\psi `$ is dissociated either by color screening, if the matter is a quark-gluon plasma, or by the color electric field, if it is string matter. In both cases we expect $`J/\psi `$ suppression to be more appreciable in the tip-tip collision than in the body-body collision. In this article we report the results of our study of this effect in U-U collisions at SPS energies using a hadron-string cascade model, JPCIAE, which has been shown to give a satisfactory description of the $`J/\psi `$ suppression data from collisions of spherical nuclei at SPS energies.
As we shall show below, because of the higher energy density and longer collision time, a more pronounced $`J/\psi `$ suppression due to color electric dissociation is indeed seen in the tip-tip collision than in the body-body collision. The JPCIAE model is an extension of the LUND string model to include $`J/\psi `$ production and absorption. In this model, a nucleus-nucleus collision is depicted as a superposition of hadron-hadron collisions. If the center-of-mass energy of a hadron-hadron collision is larger than a certain value, about 4 GeV, the PYTHIA routines are called to describe the interaction. Otherwise, it is treated as a conventional two-body interaction as in the usual cascade model. Furthermore, for hadron-hadron collisions with center-of-mass energy above 10 GeV, a $`J/\psi `$ is produced using the PYTHIA routines through the reaction $$g+g\to J/\psi +g,$$ (1) where $`g`$ denotes gluons in a hadron. Final-state interactions among produced particles and the participant and spectator nucleons are taken into account by the usual cascade model. Mechanisms for $`J/\psi `$ suppression in the JPCIAE model include the hadronic absorption by both baryons and mesons, the energy degradation of leading nucleons, and the dissociation of the $`J/\psi `$ precursor state of the $`c\overline{c}`$ pair in the color electric field of strings. The total $`J/\psi `$ suppression factor in JPCIAE is thus given by $$S_{\mathrm{total}}^{J/\psi }=S_{\mathrm{abs}}^{J/\psi }\times S_{\mathrm{deg}}^{J/\psi }\times S_{\mathrm{dis}}^{J/\psi },$$ (2) where $`S_{\mathrm{abs}}`$, $`S_{\mathrm{deg}}`$, and $`S_{\mathrm{dis}}`$ denote, respectively, the suppression factors due to the above three mechanisms. We note that the first two mechanisms have also been employed recently to study $`J/\psi `$ production in proton-nucleus collisions at Fermilab energies. Details of the JPCIAE model can be found in the references. For the U-U collision, the initial distribution of the projectile and target nucleons is assumed to be uniform inside an ellipsoid with a major semi-axis $`a=R(1+2\delta /3)=8.4`$ fm and a minor semi-axis $`b=R(1-\delta /3)=6.5`$ fm, using a deformation parameter $`\delta =0.29`$ and an equivalent spherical radius $`R=7.0`$ fm. Other parameters in the model are kept the same as before, i.e., $`\tau =1.2`$ fm/c for the formation time of produced particles, $`\sigma _{J/\psi B}^{\mathrm{abs}}=6`$ mb and $`\sigma _{J/\psi M}^{\mathrm{abs}}=3`$ mb for the $`J/\psi `$ absorption cross sections by baryons and mesons, respectively, and $`c_s=6.0\times 10^{-7}`$ GeV<sup>-1</sup> for the effective color electric dissociation coefficient, which was determined from fitting the $`J/\psi `$ suppression data from the Pb-Pb collision at 158A GeV/c. In Fig. 1, we show by solid circles the calculated total $`J/\psi `$ suppression factor in central U-U collisions at 200A GeV/c for different orientations of the colliding nuclei. The results labeled body-body (B-B), tip-body (T-B), and tip-tip (T-T) correspond, respectively, to collisions in which both minor, one major and one minor, and both major axes of the projectile and target nuclei are parallel to the beam direction. For comparison, we also show the results from treating both nuclei as spherical (S-S). We see that the $`J/\psi `$ survival probability decreases as the orientation changes from B-B to S-S, to T-B, and to T-T.
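The geometric inputs behind this trend can be checked in a few lines. The sketch below (ours, not part of the JPCIAE code) uses the semi-axes quoted above and reproduces the orientation-dependent passing-time ratios discussed in the next paragraph:

```python
a, b, R = 8.4, 6.5, 7.0     # U semi-axes and equivalent spherical radius (fm)

# Nuclear passing times for the four geometries, in units of 1/(beta*gamma);
# only the ratios matter for the comparison made in the next paragraph.
t = {"B-B": 2 * b, "S-S": 2 * R, "T-B": a + b, "T-T": 2 * a}
ratios = {k: round(v / t["S-S"], 2) for k, v in t.items()}
print(ratios)               # {'B-B': 0.93, 'S-S': 1.0, 'T-B': 1.06, 'T-T': 1.2}
print(t["T-T"] / t["B-B"])  # ~1.29: the ~30% T-T vs B-B difference cited below
```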
This result can be understood qualitatively from the dependence of the passing time of the two colliding nuclei and of the number of participant nucleons on their orientation. For central collisions the number of participant nucleons is essentially the same for the four collision geometries, but the nuclear passing times differ. For the B-B, S-S, T-B, and T-T collisions, the nuclear passing times are approximately given by $`t^{BB}=2b/(\beta \gamma )`$, $`t^{SS}=2R/(\beta \gamma )`$, $`t^{TB}=(a+b)/(\beta \gamma )`$, and $`t^{TT}=2a/(\beta \gamma )`$, respectively. In the above, $`\beta `$ and $`\gamma `$ are, respectively, the velocity and the Lorentz factor in the nucleon-nucleon center-of-mass frame. Relative to the passing time for the collision of spherical nuclei, the following ratio is obtained: $`t^{BB}:t^{SS}:t^{TB}:t^{TT}=0.93:1.0:1.07:1.2`$. The 28% decrease in the $`J/\psi `$ suppression factor for the T-T collision compared to that for the B-B collision, as shown in Fig. 1, is close to the 30% difference between the passing times in these collisions. The effect of deformation on each of the $`J/\psi `$ suppression mechanisms is also shown in Fig. 1, by triangles for $`S_{\mathrm{abs}}`$, inverted triangles for $`S_{\mathrm{deg}}`$, and squares for $`S_{\mathrm{dis}}`$. It is seen that the hadronic absorption is about 21% stronger in the T-T collision than in the B-B collision. The energy degradation of leading nucleons leads to only about 5% more suppression in the T-T than in the B-B collision. This small deformation effect results from the fact that $`J/\psi `$'s are mainly produced in the first nucleon-nucleon collisions. No deformation effect is seen from the dissociation by the effective color electric field of strings. That is because the dissociation probability of the $`J/\psi `$ precursor $`c\overline{c}`$ state in the initial dense string matter is assumed to depend on the energy, centrality, and size of the collision system as $$P_d=c_s\sqrt{s_{\mathrm{NN}}}\,e^{-(b/R_L)^2}AB$$ (3) ($`S_{\mathrm{dis}}^{J/\psi }`$=1-$`P_d`$), where $`\sqrt{s_{\mathrm{NN}}}`$ is the initial center-of-mass energy of two colliding nucleons, $`b`$ is the impact parameter of the nucleus-nucleus collision, and $`R_L`$ is the radius of the larger of the projectile and target nuclei, whose mass numbers are $`A`$ and $`B`$, respectively. This dissociation probability is based on the continuous excitation picture of strings in the LUND model. The color electric field is built up along a string through binary nucleon-nucleon (or nucleon-string, string-string) collisions. The more such collisions a string experiences, the stronger the color electric field formed along it, and thus the more likely a $`J/\psi `$ is to be dissociated by this field. In the above calculations we used the same dissociation coefficient in Eq. (3) for all orientations, even though the energy density reached in the initial string matter may differ among the orientations of the colliding nuclei. To include the effect of nuclear deformation on $`J/\psi `$ absorption due to the color electric field in the initial dense matter, we need to take into account the dependence of the dissociation coefficient $`c_s`$ on the collision geometry. To explore this dependence, we first show in Fig.
2 the energy density determined from the JPCIAE model for a spherical volume with a radius of 2 fm and located at the center of the target nucleus as a function of time, which starts when the first nucleon-nucleon collision occurs. The energy density reached in the T-T collision is about 35% higher than in the B-B collision. We thus multiply the effective color electric dissociation coefficient $`c_s`$ used previously for collisions of spherical nuclei by the relative increase in the maximum energy density, i.e., 1.2, 1.07, and 0.93 for the T-T, T-B, and B-B collisions, respectively. Results from using the modified color dissociation coefficients are shown in Fig. 3. We see that this leads to a more pronounced dependence of the color dissociation mechanism on the nuclear orientation than that of both the nuclear absorption and energy degradation effects. As a result, $`J/\psi `$ suppression in the T-T collision is now twice stronger than in the B-B collision. This will be useful in verifying the formation of a non-hadronic dense matter in the initial stage of relativistic nucleus-nucleus collisions once the orientation of the colliding nuclei is known. In summary, a hadron and string cascade model, JPCIAE, has been used to study the effect of nuclear deformation on $`J/\psi `$ suppression in U-U collisions at 200A GeV/c. A 35% higher initial energy density and a factor of two more $`J/\psi `$ suppression are found in collisions if the major axes of both nuclei are along the beam direction than if they are perpendicular to the beam direction. Moreover, we have found a much more pronounced deformation effect on $`J/\psi `$ suppression by the color electric dissociation than by the hadronic absorption and the energy degradation of leading nucleons. The study of $`J/\psi `$ suppression in collisions of deformed nuclei will thus help find the signature for the formation of a quark-gluon plasma in relativistic nucleus-nucleus collisions. We thank Che-Ming Ko, Bao-An Li and Bin Zhang for useful discussions and Torbjörn Sjöstrand for detailed instructions on using the PYTHIA programs. BHS would also like to thank Joe Natowitz for the hospitality during his stay at the Cyclotron Institute of Texas A&M University. This work is supported in part by the National Science Foundation Grant No. PHY-9870038, the Department of Energy Grant DE-FG03-93-ER40773, the Robert A. Welch Foundation Grant No. A-1358, and the Texas advanced Research Program Grant No. FY97-010366-068. BHS is also partially supported by the Natural Science Foundation of China and Nuclear Industry Foundation of China.
# First-Principles Study of Carbon Monoxide Adsorption on Zirconia-Supported Copper ## I INTRODUCTION Oxide-supported transition metals have received much attention due to their utility in automotive catalysis and other catalytic processes. Use of an oxide as a support material for transition metals has clear economic benefits, because it reduces the amount of costly noble metals (e.g., Rh, Pt) that need to be used in functioning catalysts. Moreover, the ability of the support material to contribute oxygen to chemical reactions can significantly enhance many catalytic mechanisms. Zirconium dioxide (or zirconia) is a technologically important catalytic support medium. This material has many other applications as well, including gas sensors, solid fuel cells, and high-durability coatings. Copper has recently been identified as an important catalytic agent for NO reduction and CO oxidation. For instance, Cu-ion exchanged zeolites such as Cu/ZSM-5, alumina-supported CuO, and ZrO<sub>2</sub>-supported copper all exhibit high activity for the catalytic promotion of NO<sub>x</sub> reduction by CO (forming N<sub>2</sub> and CO<sub>2</sub>). An attractive quality of the Cu/ZrO<sub>2</sub> system is its unusually high catalytic activity for the NO-CO reaction at very low temperatures (100-200 °C). Because of the interest in Cu/ZrO<sub>2</sub> as part of the next generation of automotive catalysts, we have performed a series of ab initio calculations to study the energetics of molecular adsorption on this surface. Although there have been previous theoretical studies of bulk ZrO<sub>2</sub> and a recent thorough investigation of ZrO<sub>2</sub> surfaces, the present investigation is the first to study molecular adsorption onto metal films on this surface. To the best of our knowledge, this is also the first ab initio study of oxide-supported metal chemisorption. A monolayer of copper was chosen for several reasons. The monolayer geometry can be conveniently determined theoretically, and we believe that it can be reproducibly prepared experimentally. Adsorption onto a monolayer of copper also forms the starting point for systematic studies of clusters of atoms on surfaces. Furthermore, interactions of molecules with individual atoms in a copper monolayer make contact with zeolite systems. To aid in interpretation, we also make a direct comparison between adsorption onto Cu/ZrO<sub>2</sub> and onto the bare Cu(100) surface. ## II METHOD All calculations in this study use density functional theory (DFT) within the plane-wave pseudopotential method to determine all structural parameters and energies. Optimized pseudopotentials for all elements were constructed to be well converged at a 50 Ry plane-wave cut-off energy. For structural parameters, the local density approximation (LDA) of DFT gives highly accurate results for most bulk and surface systems. Typically, the optimal computed parameters are within 1% of the experimental values. For adsorption energies, however, the generalized gradient approximation (GGA) typically yields significantly more accurate results. To obtain GGA results for the adsorption energies of copper and carbon monoxide on the substrate we take a hybrid approach. First, the structure is fully relaxed within the LDA (i.e., the atoms are moved until all atomic forces are less than 0.01 eV/Å). Second, the GGA energy of the optimal configuration is obtained from the ground-state charge density of the LDA calculation.
This hybrid approach yields significantly more accurate adsorption energies than LDA calculations alone; for most systems, this approach gives results very close to fully self-consistent GGA calculations. For bulk systems, Monkhorst-Pack special $`k`$-point sets were used for Brillouin zone integrations. For the bulk cubic system, calculations using 10 irreducible $`k`$-points gave convergence error of less than 1 meV per unit cell. Similar convergence criteria were used for the other bulk systems. For calculations involving the (111) surface unit cell, we used the Ramírez-Böhm $`k`$-point sampling method and found that convergence was obtained with 3 irreducible $`k`$-points. ## III RESULTS and DISCUSSION ### A Bulk Zirconia The ground-state phase of zirconia—baddeleyite—has a very complex monoclinic structure ($`m`$-ZrO<sub>2</sub>) with nine internal degrees of freedom and four formula units per primitive cell. Each Zr cation is 7-fold coordinated by oxygen in the monoclinic phase. The structure can be viewed as alternating layers of Zr and O with the coordination of the O atoms alternating between 3 and 4 from one oxygen layer to the next. At about 1400 K, a first-order martensitic phase transition occurs, yielding the tetragonal phase ($`t`$-ZrO<sub>2</sub>). The tetragonal form of zirconia can be viewed as a simple distortion of the cubic fluorite structure, with alternating columns of oxygen atoms along one crystallographic axis shifting upward or downward by an amount $`d_z`$. This structure is described by two lattice constants, $`a`$ and $`c`$, and it has two formula units per unit cell. Above about 2650 K, zirconia assumes the cubic fluorite structure ($`c`$-ZrO<sub>2</sub>). In this phase there are only two degrees of freedom: the lattice constant, $`a`$, and an internal coordinate, $`u`$, reflecting the positions of the oxygen atoms along the body diagonal of the cubic cell. For the ideal fluorite structure, the value of this coordinate is 0.25. The cubic structure can be stabilized at room temperature by incorporation of a few percent of Y<sub>2</sub>O<sub>3</sub>. Addition of cation impurities also improves the thermochemical properties substantially, giving the cubic phase extremely high strength and thermal-shock resistance. Table 1 shows the optimized structural parameters for the monoclinic, tetragonal and the cubic phase of bulk zirconia (see Fig. 1). All computed values are in excellent agreement with experiment. ### B Cubic Zirconia (111) Surface Table 2 shows the relaxation data for two (111) surfaces of c-ZrO<sub>2</sub>: the stoichiometric and the fully reduced (i.e., top layer of oxygen removed) surfaces. The stoichiometric surface shows an outward relaxation of the top oxygen-zirconium spacing and a significant reduction in the oxygen-oxygen interlayer spacing between the two outermost layers of ZrO<sub>2</sub> formula units. For the reduced surface, removal of the top layer of oxygen changes these relaxations dramatically. We predict a huge (24.4%) inward relaxation of the top zirconium-oxygen spacing and a reversal in the relaxation of the oxygen-oxygen spacing between the top two layers of formula units. Our calculations show a large energetic cost of 9.4 eV/atom for completely reducing the (111) surface. The possibility of surface reconstructions has not been considered in this study. A detailed study of many zirconia surfaces can be found in reference 16. 
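Before turning to the binding-site results in the next subsection, it may help to spell out the total-energy bookkeeping that underlies statements such as "the adsorption energy is about half the cohesive energy". The sketch below is ours, with placeholder numbers rather than values from this work:

```python
def adsorption_energy_per_atom(e_slab_plus_ad, e_clean_slab, e_free_atom, n_atoms):
    """E_ads = -[E(slab+adsorbate) - E(slab) - n*E(free atom)] / n; positive = bound."""
    return -(e_slab_plus_ad - e_clean_slab - n_atoms * e_free_atom) / n_atoms

# Hypothetical total energies in eV, purely to illustrate the metastability test:
e_ads = adsorption_energy_per_atom(-1234.0, -1230.2, -0.1, 2)
e_coh_cu_bulk = 3.5               # illustrative bulk Cu cohesive energy per atom (eV)
print(e_ads)                      # 1.8 eV/atom in this made-up example
print(e_ads < e_coh_cu_bulk)      # True: the supported monolayer would be metastable
```

When the adsorption energy per atom falls below the bulk cohesive energy, annealing is expected to drive the adlayer toward three-dimensional particles, which is exactly the argument made below for Cu on c-ZrO<sub>2</sub>(111).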
### C Copper on c-Zirconia Before investigating the chemisorption of CO on zirconia supported copper, we first determine the preferred binding site for copper on (111) c-ZrO<sub>2</sub>. There are three high-symmetry positions on this surface: the top oxygen site (O1), the three-fold hollow site directly above the zirconium atom (Zr), and the hollow site above the subsurface oxygen atom in the top ZrO<sub>2</sub> formula unit (O2). Table 3 shows the GGA binding energies for a monolayer of copper centered on each of these three sites. As shown in the table, the O1 site is preferred by about 0.3 eV/atom over the other possible sites. It is important to compare the binding energy of copper on the c-ZrO<sub>2</sub> surface with the cohesive energy of bulk copper. Since the cohesive energy is nearly twice the adsorption energy of copper on c-ZrO<sub>2</sub>, the oxide-supported copper monolayer is, at best, a metastable configuration. Annealing this monolayer would result in the formation of copper particles on the oxide surface. ### D CO on Cu on ZrO<sub>2</sub>(111) with comparison to CO on Cu(100) With the preferred binding site of copper determined, the binding energy of carbon monoxide on the oxide-supported monolayer of copper can now be calculated. Table 4 shows the GGA binding energy and the relaxed structural parameters of carbon monoxide on the oxide supported copper monolayer. It is illustrative to compare the binding energy of carbon monoxide on zirconia-supported copper to the binding energy of a half monolayer of carbon monoxide on the Cu(100) surface. As seen in Table 4, the nearest-neighbor distance of CO molecules is very similar for these systems. Our computations reveal that the binding energy of CO on zirconia-supported copper is nearly 0.2 eV greater than that of CO on the Cu (100) surface. This is most likely due to charge transfer from the copper atoms to the zirconia substrate. It has been shown experimentally that the Cu-CO chemisorption bond is dative, resulting from charge transfer from the weakly anti-bonding 5$`\sigma `$ orbital of the CO molecule to the metal conduction bands. This process is therefore enhanced due to the presence of the strongly electronegative oxide substrate. ## IV Conclusions We have computed the bulk parameters of monoclinic, tetragonal and cubic zirconia which are in excellent agreement with experiment. Our surface relaxation data for the stoichiometric (111) c-ZrO<sub>2</sub> surface shows an outward relaxation of the top layer of oxygen. This may be related to the propensity of the ZrO<sub>2</sub> top-site oxygen atoms to participate in surface chemical reactions. Furthermore, we have found that the preferred binding site for copper on the $`c`$-ZrO<sub>2</sub> (111) surface, by 0.3 eV/atom, is atop the surface oxygen atom. We have calculated the adsorption energy of CO on an oxide-supported monolayer of copper. Comparing our results to that of CO on Cu (100), we find that the presence of the support increases the binding energy by over 0.2 eV/molecule. ###### Acknowledgements. This work was supported by the Laboratory for Research on the Structure of Matter as well as NSF grant DMR 97-02514. The authors would also like to thank the Alfred P. Sloan Foundation for support. Computational support was provided by the National Center for Supercomputing Applications and the San Diego Supercomputer Center.
# Instabilities and the Formation of Small-Scale Structures in Planetary Nebulae ## 1. Introduction It has become clear in recent years that the morphology of Planetary Nebulae (PNe) is much more complicated than was previously thought. This revolution has been brought about mainly due to spectacular observations of PNe using the Hubble Space Telescope (see reviews by Bruce Balick, Howard Bond, Raghvendra Sahai and others in this volume), accompanied by images of proto-planetary and young planetary nebulae, especially in infrared bands. The variety of structure visible at small scales, from rings, arcs, and disks to FLIERS, Jets and Ansae, is well described elsewhere in this volume. The global morphology of PNe has generally been attributed to the so-called interacting stellar winds model (Kwok, Purton and Fitzgerald 1978), where a “fast” stellar wind from a post-AGB star interacts with a structured ambient medium, presumably a wind ejected at an earlier epoch. However the variety of filaments, blobs, fast-moving knots and “searchlight beams” seen in the HST observations are not easily explained. High-resolution numerical simulations of planetary nebulae reveal the presence of instabilities that arise during the nebular evolution. These instabilities are due to various flow patterns, which in turn depend on the initial conditions and the hydrodynamic model assumed. It is possible that they may give rise to some of the small-scale structure visible in PNe images. In this paper we study the formation and growth of some of these instabilities. More details can be found in Dwarkadas & Balick (1998a, hereafter DB98). ## 2. Various Types of Models Numerical models of Planetary Nebulae can be considered as falling into these basic categories: (a) Constant Wind Models \- Models where the wind properties are constant throughout the evolution of the nebula. (b) Evolving Wind Models \- The wind properties change with time during the course of evolution of the nebula. (c) Magnetohydrodynamic (MHD) Models \- Magnetic field responsible for the asymmetry. (d) Radiation-Hydrodynamic (RHD) Models \- Incorporating a more realistic treatment of the radiation transfer. In this paper I will discuss predominantly models in the first two categories. MHD models have been very comprehensively reviewed by Guillermo Garcia-Segura (this volume), and RHD models by Adam Frank and Garrelt Mellema (Frank 1999 and references therein). ### 2.1. Constant Wind Models This is the prototypical PN model, where the very fast ($`1000\mathrm{km}\mathrm{s}^1`$), low density wind collides with the slow ($`10\mathrm{km}\mathrm{s}^1`$), dense wind from a previous epoch. Since the wind properties are constant, the nebula will reach a self similar stage in several doubling times of the radius, wherein it begins to expand with a constant velocity and the shape remains time-invariant (see for example Dwarkadas, Chevalier & Blondin 1996, hereafter DCB96). The constant expansion velocity does not allow for the growth of Rayleigh-Taylor instabilities. If an asymmetry exists in the slow ambient wind, with denser material at the equator compared to the poles, the outcome will be an aspherical planetary nebula. A pressure gradient is created in the shocked fast wind due to this asymmetry, with higher pressure at the equator than the poles. This directs a flow along the walls from the equator to the poles (Figure 1). In addition, there is a tangential flow in the opposite direction within the dense shell (DCB96). 
The resultant shear flow leads to the growth of Kelvin-Helmholtz (K-H) instabilities along the edges, giving the edges a rippled appearance, as is seen in many PNe. The shell appears clumpy, but usually remains contiguous throughout the simulations. The corrugated appearance is primarily due to the asymmetry, and spherically symmetric PNe will not in general be susceptible to this instability. ### 2.2. Evolving Wind Models Both stellar evolution models and observations of PNe seem to indicate an evolution in the wind properties over time. Despite significant progress in these areas (see articles by Schonberner and Wood) we still do not possess a good theoretical prescription for the characteristics of the mass loss, telling us how the wind properties change with time, and the relevant timescales. Here I focus on results of simulations which were performed assuming that the wind velocity increases as a power law with time. The wind mass loss rate decreases correspondingly, such that the mass flux (or momentum flux) remains constant. The resultant instabilities do not depend strongly on the manner in which the wind properties vary, and we concentrate more on the general conditions that lead to the growth of the instability and in what stage of a PN’s development they are likely to arise. If the velocity and mass loss rate of the so-called “fast wind” are slowly increasing, the nebula will go through an initial momentum-conserving or radiative stage. A radiative inner shock means there is no formation of a hot bubble, and the inner and outer shocks lie very close to each other (DB98), enclosing a thin shell of swept up material. A shear flow along the inner walls of the asymmetrical nebula gives rise to K-H instabilities. If the shell is thin these instabilities may grow to a size comparable to the shell thickness. In such a scenario the perturbations may continue to grow, giving rise to the non-linear thin shell instability (NTSI, Figure 2a). This arises when a thin shell, bounded between two sharp discontinuities, is perturbed by an amount larger than the thickness of the shell (Vishniac 1994). The perturbations tend to grow non-linearly, forming conical projections that are characterised by shear flow within the unstable fragments (Blondin & Marks 1998). As the fast wind velocity increases and mass loss rate decreases, the time taken to radiate away the post-shock energy begins to exceed the flow rate. The inner shock ceases to be radiative, and a hot bubble of shocked fast wind separates it from the shocked ambient gas. As long as the velocity continues to increase, the high-pressure, low density bubble accelerates the low-pressure, high density swept-up material. The opposing density and pressure gradients lead to the growth of the Rayleigh-Taylor (R-T) instability, and wispy filaments can be seen growing inwards from the edges of the nebula (Figure 2b). These filaments, with bulbous shaped heads caused by the K-H instability as gas within the nebula flows around them, will continue to form as long as the wind velocity is evolving. Simultaneously, there still persists the shear flow along the inner wall of the nebula. This flow results in some of the filamentary material being constantly stripped off and mixed into the interior, leading to cold, dense and slow moving material being mixed in with the high-temperature gas. The resultant flow within the hot bubble is characterised by the formation of vortices and the onset of turbulence. 
Herein I have given a flavour for some of the instabilities that may occur. The filamentary appearance of many nebulae, such as Hubble 5 and NGC 6537, may be due to the R-T instabilities. R-T instabilities may also occur at ionization fronts (O’Dell & Burkert 1997). Many models of asymmetrical planetary nebulae require the presence of a dense torus of material surrounding the central star. The interaction of the stellar wind with this torus may lead to the formation of thin shell instabilities (see Dwarkadas & Balick 1998b). Other instabilities may be seen in RHD and MHD simulations. The next step will be to produce simulated images from the numerical simulations to enable a proper comparison with the observations. #### Acknowledgments. I would like to thank the organisers for hosting a most stimulating and timely conference. Travel support from the Science Foundation for Physics at the University of Sydney is gratefully acknowledged. ## References Blondin, J. M., & Marks, B. S. 1996, New Astron., 1, 235 Dwarkadas, V. V., Chevalier, R. A., & Blondin, J. 1996, ApJ, 457, 773 \[DCB96\] Dwarkadas, V. V., & Balick, B. 1998a, ApJ, 497, 267 \[DB98\] Dwarkadas, V. V., & Balick, B. 1998b, AJ, 116, 829 Frank, A. 1999, New Astron Revs, 43, 31 Kwok, S., Purton, C. R., & Fitzgerald, P. M. 1978, ApJ, 219, L125 O’Dell, C. R., & Burkert, A. 1997, in IAU Symposium 180, pg 332 Vishniac, E. T. 1994, ApJ, 428, 186
# New PAH mode at 16.4 𝜇m Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: the United Kingdom, France, the Netherlands, Germany), and with the participation of ISAS and NASA. ## 1 Introduction The launch of ISO (Infrared Space Observatory, Kessler et al. 1996) has vastly improved our ability to explore the spectral range beyond 13 $`\mu `$m, which is blocked by atmospheric absorption. Airborne or spaceborne mid-infrared spectrometers prior to ISO did not have the sensitivity to determine whether the carriers of the Aromatic Infrared Bands (AIBs), observed in the interstellar medium (ISM) between 3 and 13 $`\mu `$m, also emit at longer wavelengths. The lack of interstellar data on possible AIBs at 13 – 25 $`\mu `$m meant that laboratory spectroscopy of interstellar analogs in this spectral range was scanty. The hypothesis that polycyclic aromatic hydrocarbons (PAHs) are responsible for the AIBs (Léger and Puget 1984; Allamandola et al. 1985) has led much laboratory work to focus on PAHs. Spectra longward of 13 $`\mu `$m were initially published for a limited sample of small PAH molecules (Léger et al. 1989a; Karcher et al. 1985; Allamandola et al. 1989). Recently a more extensive laboratory study (Moutou et al. 1996) has shown that PAH spectra contain common vibrational frequencies, namely near 16.2, 18.2, 21.3 and 23.2 $`\mu `$m (617, 550, 470 and 430 cm<sup>-1</sup>). These frequencies, at which many species exhibit an absorption mode, should correspond to modes with a higher probability of detection in the ISM. The IR spectra of many neutral and ionized PAHs isolated in argon matrices have also been reported recently (Hudgins and Sandford 1998a,b,c). Small particles such as PAHs in the ISM are believed to emit after transient heating to high temperatures (Sellgren 1984), while laboratory absorption spectra are measured at lower temperatures. This results in band broadening and wavelength shifts of the interstellar emission features compared to the laboratory absorption features. The central wavelengths of PAH vibrational bands are observed to increase with temperature, and anharmonicity as well as mode couplings contribute to the broadening (Joblin et al. 1995). We have used the results of Joblin et al. (1995) to correct the measured central wavelengths of features under laboratory (absorption at lower temperature) conditions to the expected wavelengths under astronomical (emission at higher temperature) conditions. In this paper, we compare the laboratory measurements of PAHs in the 14 – 25 $`\mu `$m domain to astronomical spectra obtained by ISO. ## 2 Observations High-resolution Short Wavelength Spectrometer (SWS) spectra (de Graauw et al. 1996) of NGC 7023, M17, and the Orion Bar were obtained with ISO in a 14<sup>′′</sup>$`\times `$27<sup>′′</sup> beam. The data reduction was carried out with SWS-IA (update: October 1997). Bands 3A (12.0 – 16.5 $`\mu `$m) and 3C (16.5 – 19.5 $`\mu `$m) are subject to fringing (Schaeidt et al. 1996). Fringes were removed by subtracting a sine function, whose amplitude and phase were determined by least-squares fitting, from the high-frequency part of each detector scan. The order edge between bands 3A and 3C occurs at 16.5 $`\mu `$m and is visible in Figure 1. A detailed description of the data reduction scheme is given in Moutou et al. (1999).
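The defringing step lends itself to a compact illustration. The sketch below is an assumed form of the procedure described above (the actual SWS-IA pipeline code is not reproduced here); the smoothing window and the starting fringe frequency are illustrative parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe(x, amp, freq, phase):
    return amp * np.sin(2.0 * np.pi * freq * x + phase)

def defringe(wavelength, flux, freq_guess, window=25):
    """Fit a sine to the high-frequency residual of a scan and subtract it."""
    smooth = np.convolve(flux, np.ones(window) / window, mode="same")
    resid = flux - smooth                      # high-frequency part of the scan
    p0 = [np.std(resid) * np.sqrt(2.0), freq_guess, 0.0]
    popt, _ = curve_fit(fringe, wavelength, resid, p0=p0)
    return flux - fringe(wavelength, *popt)
```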
### 2.1 NGC 7023 We observed the bright reflection nebula NGC 7023 at a position 27<sup>′′</sup> W 34<sup>′′</sup> N of its illuminating star HD 200775. The SWS01 spectrum (2.5 – 45 $`\mu `$m) has a spectral resolution of $`R`$ = $`\lambda /\mathrm{\Delta }\lambda `$ = 900 (Sellgren et al. in preparation). Another spectrum with SWS-AOT6, at $`R`$ = 1800, provided an independent spectrum in the reduced domain 14.5 – 19.5 $`\mu `$m on which we focus here. The continuum level in this object is quite low and almost flat between 13 and 25 $`\mu `$m, which offers the best conditions for searching for new features. Comparison of spectra taken at different times is a reliable test for distinguishing noise features from real emission bands. Both spectra are displayed in Fig. 1. The SWS slit orientation differed between these spectra, which explains the varying intensity of the 17.0 $`\mu `$m H<sub>2</sub> line. The spectra contain an emission feature at 16.4 $`\mu `$m, detected in all independent scans. This new emission feature was first mentioned by Boulanger et al. (1998) and Tielens et al. (1999). We have fit the 16.4 $`\mu `$m band in our SWS data by a Lorentzian profile (Fig. 2). Lorentzian fitting of AIB spectra has proven to be a useful tool for extracting the band contribution from the underlying continuum (Boulanger et al. 1998). ### 2.2 Detection in other objects As shown in Fig. 3, the 16.4 $`\mu `$m band is also detected in the ISO-SWS01 ($`R`$ = 900) spectra of the Orion Bar ($`\alpha _{2000}=5:35:20.3`$, $`\delta _{2000}=5:25:20`$), and in M17-SW, at the interface between the HII region and the molecular cloud ($`\alpha _{2000}=18:20:22.1`$, $`\delta _{2000}=16:12:41.3`$). More details on these spectra are given in Verstraete et al. (in preparation). Individual values of the band parameters are listed in Table 1. The average central wavelength of the feature is 16.42 $`\mu `$m (608.9 cm<sup>-1</sup>) and the average band width is 0.16 $`\mu `$m (5.8 cm<sup>-1</sup>). The uncertainty in the intensity is based on the difference between independent scans and does not include the overall calibration uncertainty of SWS. Some variation in the band-to-continuum ratio at 16.4 $`\mu `$m is observed, likely due to the stronger radiation field in M17-SW and the Orion Bar. This will be discussed further in a forthcoming paper. ## 3 Laboratory spectra The laboratory spectra of many PAHs contain a frequently observed mode peaking at 16.2 $`\mu `$m (Moutou et al. 1996; Schmidt et al., not published; Hudgins & Sandford 1998a,b,c). This mode appears slightly shifted from one molecule to the other, because its frequency depends on the internal structure of the molecular skeleton. In total, 29 molecules out of 63 show a band in between 611 and 623 cm<sup>-1</sup> (16.21 – 16.52 $`\mu `$m). The central wavelength of a PAH vibrational mode should appear redshifted in interstellar spectra, with respect to the position measured in absorption at cooler temperatures (Joblin et al. 1995). The averaged redshift observed for coronene over $``$300 Kelvins is approximately 1% of the vibration wavenumber. This would move the measured wavelength of the laboratory 16.2 $`\mu `$m band to the observed wavelength of the interstellar 16.4 $`\mu `$m band. Fig. 4 shows the measured histograms of the laboratory absorption modes, obtained by counting the modes, in 5 cm<sup>-1</sup> bins (or 0.13 $`\mu `$m bins). 
It has been done separately for the seven PAH families previously defined in the far-infrared data base of PAH spectra (Moutou et al. 1996). This number is a percentage, normalised to the total number of species in the corresponding family. Each family contains generally 5 to 9 individual species. The ”PAHs with pentagons” family has 19 components and the ”chain PAHs” contains 6 additional species taken from the work of Hudgins and Sandford (1998a,b), leading to a total number of 11 ”chain” species. The 0.16 $`\mu `$m redshift is applied to the laboratory data. We deliberately did not take into account in this statistical approach the relative strength of the modes and it is thus not similar to a spectrum, because of the difficulty of combining spectra from three different laboratory groups. However, the average spectrum of each PAH family measured by Moutou et al. 1996 is displayed in their Figure 1, and these average laboratory spectra can be directly compared to the interstellar spectra after applying a 0.16 $`\mu `$m redshift to the laboratory data. We find that the 16.4 $`\mu `$m band is especially active in the spectra of PAHs containing pentagonal rings and of linear PAHs. In this latest category, the band appears more likely at 16.25$`\mu `$m and is dominated by the spectra of small chains containing 2 to 5 rings. These molecules are not thought to be good candidates as IR emitters, as they will probably not survive the interstellar radiation fields (Omont, 1986). Comparatively, the distribution of modes for other kinds of PAHs in this spectral domain is more spread out in wavelength or has another accumulation point. The contribution of the PAHs containing pentagons could then possibly dominate the interstellar emission spectra we observe. ## 4 Discussion Calculations of infrared spectra are rare for large molecules. In our sample of PAHs with pentagons, only the theoretical spectrum of fluoranthene (C<sub>16</sub>H<sub>10</sub>) has been calculated (Klaeboe et al. 1981). The mode is observed at 616 cm<sup>-1</sup> in the laboratory and predicted by simulations to lie at 668 cm<sup>-1</sup>; it is identified as a B2 in-plane vibration of C–C bonds. It is a complex vibration of the global ring structure, which could be described as a tentative “rotation” of the central pentagonal ring (J. Brunvoll, private comm.) and consecutive movement of all carbon atoms. In other molecules, it may also correspond to a global vibration of the carbon skeleton. Since the mode is also present in PAHs without pentagons, it is not clear yet if the five-membered rings have a special role in the IR activity at 16.4$`\mu `$m. We investigate in any case the implications of the presence of pentagonal rings in PAHs: * Pentagonal rings inside a PAH molecule are known to produce a strong feature around 7 $`\mu `$m (Moutou et al. 1996, Hudgins & Sandford 1998c). No individual feature is detected at 7 $`\mu `$m in NGC 7023, but we can place an upper limit on any 7 $`\mu `$m band from pentagonal rings from the 7.0 $`\mu `$m continuum level (Moutou et al. 1999). We adopt the laboratory width of 0.14 $`\mu `$m for a pentagonal 7 $`\mu `$m band. We can then estimate an upper limit on the ratio of flux in the 7 $`\mu `$m band to flux in the 16.4 $`\mu `$m band of $`<`$10 in NGC 7023. This upper limit on the flux ratio is not in conflict with the measured laboratory value (2.5). 
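Two small bookkeeping steps recur in this comparison: shifting laboratory band positions redward by about 1% (0.16 $`\mu `$m at 16.2 $`\mu `$m, following the temperature behaviour of Joblin et al. 1995) and binning the mode positions in 5 cm<sup>-1</sup> bins as in Fig. 4. A minimal sketch, with illustrative band positions only:

```python
import numpy as np

lab_bands_um = np.array([16.21, 16.26, 16.35, 16.52])   # illustrative positions
shifted_um = lab_bands_um * 1.01                        # ~1% thermal redshift
wavenumbers = 1.0e4 / shifted_um                        # convert to cm^-1
counts, edges = np.histogram(wavenumbers, bins=np.arange(595.0, 640.0, 5.0))
print(shifted_um.round(2))   # 16.2 um shifts to ~16.37 um, close to 16.4 um
```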
We conclude that the non-detection of a 7 $`\mu `$m feature would not rule out a possible signature of pentagons in the ISM at 16.4 $`\mu `$m . * In laboratory spectra where a 16.2 $`\mu `$m mode is observed, other modes may be present, but at positions that vary a lot from one molecule to another (Moutou et al. 1996, and Fig. 4). These weak features are therefore under the detectability limit in our spectra. * The fullerene molecule C<sub>60</sub>, which contains pentagonal rings, and its related cation C$`{}_{}{}^{+}{}_{60}{}^{}`$ are not detected in NGC 7023 (Moutou et al. 1999). However, the C<sub>60</sub> infrared spectrum does not show the 16.4 $`\mu `$m band (Krätschmer et al. 1990), for symmetry reasons. The low abundance of 60-atom fullerenes is therefore not evidence against pentagonal rings in the ISM. * In the evolution of aromatic molecules, pentagonal rings tend to curve a molecular plane composed of pure hexagonal PAHs. The molecules will then become tri-dimensional while growing. A global scheme of aromatic compounds, from simple planar PAHs to small carbon grains or anthracite coals such as have been shown to fit the spectra of some planetary nebulae (Guillois et al. 1996) requires such an intermediate evolution state, where molecules of $``$ 100 carbon atoms are curved. It is also consistent with proposals for grain formation in carbon star shells. For instance, Goeres & Seldmayr (1992), with their self-consistent model DRACON of carbon chemistry, predict the formation of large aromatic molecules containing pentagons, mostly from C<sub>3</sub> accretion, because pentagons offer the best trade-off between entropy and energy in the process of carbon dust growth. Also, Kroto & McKay (1988) proposed an alternative growth sequence to produce carbon grains, involving pentagonal rings. Their presence in the network would lead to “quasi-icosahedral carbon particles with spiral structures” (see also Balm & Kroto, 1990). We have shown that the 16.4 $`\mu `$m mode observed in interstellar spectra could be explained by PAH vibrational activity. More work is required to assess the possible dominant role of five-membered ring PAHs. Other mid-infrared features should also be searched for in other ISO spectra, as the identification of other far-infrared features could give some important hints as to the nature of the AIB spectrum. Acknowledgments: We are grateful to J. Brunvoll, S.J. Cyvin and P. Klaeboe for very helpful discussions. Thanks to the anonymous referee for useful comments.
# (Invited) Landau Zener method to study quantum phase interference of Fe8 molecular nanomagnets ## I Introduction Magnetic molecular clusters are the final point in the series of smaller and smaller magnets from bulk matter to atoms. Up to now, they have been the most promising candidates for observing quantum phenomena, since they have a well-defined structure with a well-characterized spin ground state and magnetic anisotropy. These molecules are regularly assembled in large crystals where often all molecules have the same orientation. Hence, macroscopic measurements can give direct access to single-molecule properties. The most prominent examples are a dodecanuclear mixed-valence manganese-oxo cluster with acetate ligands, Mn<sub>12</sub> acetate, and an octanuclear iron(III) oxo- hydroxo cluster of formula \[Fe<sub>8</sub>O<sub>2</sub>(OH)<sub>12</sub>(tacn)<sub>6</sub>\]<sup>8+</sup>, Fe<sub>8</sub>, where tacn is a macrocyclic ligand. Both systems have a spin ground state of $`S=10`$ and an Ising-type magneto-crystalline anisotropy, which stabilizes the spin states with $`M=\pm 10`$ and generates an energy barrier for the reversal of the magnetization of about 67 K for Mn<sub>12</sub> acetate and 25 K for Fe<sub>8</sub>. Fe<sub>8</sub> is particularly interesting because its magnetic relaxation time becomes temperature independent below 0.36 K, showing for the first time that a pure tunneling mechanism between the only populated states, $`M=\pm 10`$, is responsible for the relaxation of the magnetization. Measurements of the tunnel splitting $`\mathrm{\Delta }`$ as a function of a field applied in the direction of the hard anisotropy axis showed oscillations of $`\mathrm{\Delta }`$, i.e. oscillations of the tunnel rate. In a semi-classical description, these oscillations are due to constructive or destructive interference of quantum spin phases of two tunnel paths. Furthermore, parity effects were observed when comparing the transitions between different energy levels of the system; these are analogous to the parity effect distinguishing systems with half-integer and integer spins. An alternative explanation in terms of an intermediate spin was also presented. Molecular chemistry has thus had a large impact on the study of quantum tunneling of magnetization at molecular scales. ## II Micro-SQUID technique The technique of micro-SQUIDs is very similar to the traditional SQUID technique. The main difference is that the pick-up coil is replaced by a direct coupling of the sample with the SQUID loop (Fig. 1). When a small sample is directly placed on the wire of the SQUID loop, the sensitivity of the micro-SQUID technique is ten orders of magnitude better than that of a traditional SQUID, reaching $`10^{-17}`$ emu. This sensitivity is lower when the sample is much bigger than the micro-SQUID. Our new magnetometer is a chip with an array of micro-SQUIDs. The sample is placed on top of the chip so that some SQUIDs are directly under the sample, some SQUIDs are at the border of the sample, and some SQUIDs are beside the sample (Fig. 1). When a SQUID is very close to the sample, it senses the magnetization reversal locally, whereas when the SQUID is far away it integrates over a bigger sample volume. The high sensitivity of this magnetometer allows us to study single Fe<sub>8</sub> crystals of the order of 10 to 500 $`\mu `$m. The magnetometer works in the temperature range between 0.035 and 6 K and in fields up to 1.4 T with sweeping rates as high as 1 T/s, and a field stability better than a microtesla.
The time resolution is about 1 ms, allowing short-time measurements. The field can be applied in any direction of the micro-SQUID plane with a precision much better than 0.1° by separately driving three orthogonal coils. In order to ensure good thermalisation, the crystal is fixed using a mixture of araldite and silver powder. Typical measurements of magnetic hysteresis curves for a crystal of molecular Fe<sub>8</sub> clusters are displayed in Figs. 2 and 3. The field was swept in the direction of the easy axis of magnetization, revealing roughly equally spaced steps at $`H_z\approx n\times 0.22`$ T ($`n=`$ 1, 2, 3 …) which are due to a faster relaxation of the magnetization at these particular field values. The step heights (i.e. the relaxation rates) change when a constant transverse field is applied. It is the purpose of this article to present a detailed study of this behavior, which is interpreted in terms of resonant tunneling between discrete energy levels of the $`S=10`$ spin Hamiltonian of Fe<sub>8</sub>. ## III Landau Zener method The simplest model describing the spin system of Fe<sub>8</sub> molecular clusters has the following Hamiltonian: $$H=-DS_z^2+E\left(S_x^2-S_y^2\right)+g\mu _B\stackrel{\to }{S}\cdot \stackrel{\to }{H}$$ (1) $`S_x`$, $`S_y`$, and $`S_z`$ are the three components of the spin operator, $`D`$ and $`E`$ are the anisotropy constants, and the last term of the Hamiltonian describes the Zeeman energy associated with an applied field $`H`$. This Hamiltonian defines hard, medium, and easy axes of magnetization along the $`x`$, $`y`$, and $`z`$ directions, respectively (Fig. 4). It has an energy level spectrum with $`(2S+1)=21`$ values which, to a first approximation, can be labeled by the quantum numbers $`M=-10,-9,\mathrm{},10`$. The energy spectrum, shown in Fig. 5, can be obtained by using standard diagonalisation techniques on the $`[21\times 21]`$ matrix describing the spin Hamiltonian with $`S=10`$. In the low temperature limit ($`T<`$ 0.36 K) only the two lowest energy levels, with $`M=\pm 10`$, are occupied. The avoided level crossing around $`H_z`$ = 0 is due to transverse terms containing $`S_x`$ or $`S_y`$ spin operators (see inset of Fig. 7). The spin $`S`$ is 'in resonance' between two states when the local longitudinal field is close to the avoided level crossing ($`<10^{-8}`$ T for the avoided level crossing around $`H_z`$ = 0). The energy gap, the so-called tunnel splitting $`\mathrm{\Delta }`$, can be tuned by an applied field in the $`xy`$-plane (Fig. 4) via the $`S_xH_x`$ and $`S_yH_y`$ Zeeman terms. It turns out that a field in the $`H_x`$ direction (the hard anisotropy direction) can periodically change the tunnel splitting $`\mathrm{\Delta }`$. In a semi-classical description, these oscillations are due to constructive or destructive interference of the quantum spin phases of two tunnel paths (Fig. 4). The period of oscillation is given by: $$\mathrm{\Delta }H=\frac{2k_B}{g\mu _B}\sqrt{2E(E+D)}$$ (2) The most direct way of measuring the tunnel splitting $`\mathrm{\Delta }`$ is by using the Landau-Zener model, which gives the tunneling probability $`P`$ when sweeping the longitudinal field $`H_z`$ at a constant rate over the avoided energy level crossing (see inset of Fig. 5): $$P_{M,M^{}}=1-e^{-\frac{\pi \mathrm{\Delta }_{M,M^{}}^2}{2\mathrm{\hbar }g\mu _B|M-M^{}|dH/dt}}$$ (3) Here, $`M`$ and $`M^{}`$ are the quantum numbers of the avoided level crossing, $`dH/dt`$ is the constant field sweeping rate, $`g\approx 2`$, $`\mu _B`$ is the Bohr magneton, and $`\mathrm{\hbar }`$ is the reduced Planck constant.
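The whole chain, from the spin Hamiltonian (1) to the Landau-Zener probability (3), fits in a few lines. The following is our own illustrative sketch, not the authors' code; energies are in kelvin, and the $`-DS_z^2`$ easy-axis convention and $`g=2`$ are assumed:

```python
import numpy as np

S = 10
m = np.arange(S, -S - 1, -1).astype(float)        # basis |M>: 10, 9, ..., -10
Sz = np.diag(m)
sp = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))   # <M+1|S+|M> matrix elements
Sp = np.diag(sp, k=1)
Sx = (Sp + Sp.T) / 2.0
Sy = (Sp - Sp.T) / 2.0j

D, E = 0.275, 0.046          # anisotropy constants in kelvin
muB = 0.6717                 # Bohr magneton in K/T; g = 2 throughout
hbar = 7.638e-12             # reduced Planck constant in K*s

def tunnel_splitting(Hx, Hz=0.0):
    """Splitting of the lowest M = +-10 pair from Eq. (1), in kelvin."""
    H = -D * Sz @ Sz + E * (Sx @ Sx - Sy @ Sy) + 2 * muB * (Hx * Sx + Hz * Sz)
    w = np.linalg.eigvalsh(H)
    return w[1] - w[0]

def lz_probability(delta, sweep_rate, dM=20):
    """Landau-Zener probability, Eq. (3): splitting delta in K, rate in T/s."""
    return 1.0 - np.exp(-np.pi * delta**2 /
                        (2.0 * hbar * 2.0 * muB * dM * sweep_rate))

print(np.sqrt(2 * E * (E + D)) / muB)          # Eq. (2) period: ~0.26 T
print(lz_probability(tunnel_splitting(0.0), sweep_rate=0.1))  # tiny at Hx = 0
```

Adding the fourth-order term discussed below, $`C(S_+^4+S_{-}^4)`$, is a two-line change (build $`S_+^4`$ from Sp with np.linalg.matrix_power) and is what brings the computed oscillation period up toward the observed ca. 0.4 T.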
In the following we drop the indices $`M`$ and $`M^{}`$. For very small tunneling probabilities $`P`$, we performed multiple sweeps over the resonance transition. The relaxed magnetization after $`N`$ sweeps over the $`n=0`$ resonance is given by $$M(N)\propto \mathrm{exp}\left[-2PN\right]=\mathrm{exp}\left[-\mathrm{\Gamma }t\right]$$ (4) where $`N=\frac{1}{A}\frac{dH}{dt}t`$ is the number of sweeps over the level crossing, $`\mathrm{\Gamma }=2P\frac{1}{A}\frac{dH}{dt}=\frac{\mathrm{\Delta }M}{M_s}\frac{1}{A}\frac{dH}{dt}`$ is the overall Landau-Zener transition rate, $`\mathrm{\Delta }M`$ is the change of magnetization after one sweep, and $`A`$ is the amplitude of the ac-field. We therefore have a simple tool to obtain the tunnel splitting by measuring $`P`$, or $`M(N)`$ for $`P\ll 1`$. In order to apply the Landau-Zener formula (Eq. 3), we first saturated the sample in a field of $`H_z`$ = -1.4 T, yielding $`M_{\mathrm{in}}=-M_s`$. Then, we swept the applied field at a constant rate over one of the resonance transitions and measured the fraction of molecules which reversed their spin. This procedure yields the tunneling rate $`P`$ and thus the tunnel splitting $`\mathrm{\Delta }`$ (Eq. 3). We first checked the predicted Landau-Zener dependence of the tunneling rate on the sweeping field. This can be done by plotting the relaxation of the magnetization as a function of $`t=N\frac{A}{dH/dt}`$. The Landau-Zener model predicts that all measurements should fall on one line, which was indeed the case for sweeping rates between 1 and 0.001 T/s (Fig. 6). The deviations at lower sweeping rates are mainly due to the 'hole-digging' mechanism, which slows down the relaxation. In the ideal case, we should find an exponential curve (Eq. 4). However, this may be the case only in the long-time regime (see inset of Fig. 6). The origin of this behavior is not clear, but it might be related to dipolar interactions and hyperfine couplings. For comparison, a relaxation curve without ac-field at $`H=0`$ is also shown, which exhibits much slower relaxation. We also compared the tunneling rates found by the Landau-Zener method with those found using the square-root decay method proposed by Prokof'ev and Stamp, and again found good agreement. These measurements show that the Landau-Zener method is particularly well suited to molecular clusters because it works even in the presence of dipolar and hyperfine fields, which spread the resonance transition, provided that the field sweeping rate is not too small. ## IV Oscillations of tunnel splitting Studies of the tunnel splitting $`\mathrm{\Delta }`$ at the tunnel transition between $`M=\pm 10`$, as a function of transverse fields applied at different angles $`\phi `$, defined as the azimuth angle between the anisotropy hard axis and the transverse field (Fig. 4), show that for small angles $`\phi `$ the tunneling rate oscillates with a period between minima of ca. 0.41 T, whereas no oscillations show up for large angles $`\phi `$ (see Fig. 2A of our earlier work). In the latter case, a much stronger increase of $`\mathrm{\Delta }`$ with transverse field is observed. The transverse field dependence of the tunneling rate for different resonance conditions between the state $`M=-10`$ and $`(S-n)`$ can be observed by sweeping the longitudinal field around $`H_z=n\times 0.22`$ T with n = 0, 1, 2, … The corresponding tunnel splittings $`\mathrm{\Delta }`$ oscillate with almost the same period of ca. 0.4 T (Fig. 8).
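In practice the chain is inverted: one fits the decay of Eq. (4) to get $`\mathrm{\Gamma }`$, converts it to the per-sweep probability $`P`$, and then to $`\mathrm{\Delta }`$ through the small-$`P`$ limit of Eq. (3). A sketch with placeholder numbers (not measured values):

```python
import numpy as np

hbar = 7.638e-12   # reduced Planck constant in K*s
muB = 0.6717       # Bohr magneton in K/T

def splitting_from_rate(Gamma, A, sweep_rate, dM=20):
    """Invert Gamma = 2*P*(dH/dt)/A and P ~ pi*Delta^2/(2*hbar*g*muB*dM*dH/dt)."""
    P = Gamma * A / (2.0 * sweep_rate)
    delta_sq = P * 2.0 * hbar * 2.0 * muB * dM * sweep_rate / np.pi
    return np.sqrt(delta_sq)   # tunnel splitting in kelvin

# e.g. Gamma from a log-linear fit of M(t); the numbers here are placeholders:
print(splitting_from_rate(Gamma=2.0e-3, A=1.0e-3, sweep_rate=0.1))  # ~1e-8 K
```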
In addition, comparing quantum transitions between $`M=-S`$ and $`(S-n)`$, with $`n`$ even or odd, revealed a parity (symmetry) effect which is analogous to the (Kramers) suppression of tunneling predicted for half-integer spins . This behavior has been observed for $`n=0`$ to 4 . A similar strong dependence on the azimuth angle $`\phi `$ was observed for all the resonances. In the frame of the simple giant spin model (Eq. 1), the period of oscillation (Eq. 2) is $`\mathrm{\Delta }H`$ = 0.26 T for $`D`$ = 0.275 K and $`E`$ = 0.046 K as in Ref. . This is significantly smaller than the experimental value of ca. 0.4 T. In order to quantitatively reproduce the observed periodicity we have included fourth-order terms in the spin Hamiltonian (Eq. 1), as recently employed in the simulation of inelastic neutron scattering measurements , and performed a diagonalization of the $`[21\times 21]`$ matrix describing the $`S=10`$ system. However, as the fourth-order terms are very small, only the term $`C(S_+^4+S_-^4)`$, which is the most efficient in affecting the tunnel splitting $`\mathrm{\Delta }`$, has been considered for the sake of simplicity. The calculated tunnel matrix elements for the states involved in the tunneling process at the resonances $`n`$ = 0, 1, and 2 are reported in Fig. 9, showing the oscillations as well as the parity effect for odd resonances. The period is reproduced by using $`D`$ = 0.292 K and $`E`$ = 0.046 K as in Ref. but with a different $`C`$ value of $`2.9\times 10^{-5}`$ K. The calculated tunnel splitting is, however, ca. 3 times smaller than the observed one . These small discrepancies are not surprising: with the $`C`$ parameter we take into account the effects of the neglected higher-order terms in $`S_x`$ and $`S_y`$ of the spin Hamiltonian which, even if very small, can give an important contribution to the period of oscillation and dramatically affect $`\mathrm{\Delta }`$, as first pointed out by Prokof’ev and Stamp . Our choice of the fourth-order term suppresses the oscillations of $`\mathrm{\Delta }`$ for $`|H_x|>1.4`$ T, a region which could not be studied in the current set-up. Future measurements should focus on the higher field region in order to find a better effective Hamiltonian. ## V Intermolecular dipole interaction Fig. 10 shows detailed measurements of the tunnel splitting $`\mathrm{\Delta }`$ around a topological quench for the quantum transitions between $`M=\pm 10`$, and between $`M=-10`$ and $`9`$. Particular effort was made to align the transverse field along the hard axis. The initial magnetizations $`0\le M_{\mathrm{in}}\le M_s`$ were prepared by rapidly quenching the sample from 2 K in the presence of an applied longitudinal field $`H_z`$. The quench takes approximately one second and thus the sample does not have time to relax, either by thermal activation or by quantum transitions, so that the high temperature “thermal equilibrium” spin distribution is effectively frozen in. For $`H_z>`$ 1 T, one gets an almost saturated magnetization state. The measurements of $`\mathrm{\Delta }(M_{in})`$ show a strong dependence of the minimal tunnel splittings on the initial magnetization (Fig. 10), which demonstrates that the transverse dipolar interaction between Fe<sub>8</sub> molecular clusters is largest at $`M_{\mathrm{in}}=0`$, similar to the longitudinal dipolar interaction . ## VI Conclusion Our measurement technique opens a way of directly measuring very small tunnel splittings, of the order of $`10^{-8}`$ K, that are not accessible by resonance techniques.
We have found a very clear oscillation in the tunnel splitting $`\mathrm{\Delta }`$, which is direct evidence of the role of the topological spin phase in the spin dynamics of these molecules. It is also, to our knowledge, the first observation of an “Aharonov-Bohm” type of oscillation in a magnetic system, analogous to the oscillations as a function of external flux in a SQUID ring. A great deal of information is contained in these oscillations, both about the form of the molecular spin Hamiltonian and about the dephasing effect of the environment. We expect that these oscillations should thus become a very useful tool for studying systems of quantum nanomagnets. ## VII Acknowledgment D. Rovai and C. Sangregorio are acknowledged for help with sample preparation. We are indebted to P. Stamp, I. Tupitsyn, N. Prokof’ev and J. Villain for many fruitful and motivating discussions. We thank R. Ballou, A.-L. Barra, B. Barbara, A. Benoit, E. Bonet Orozco, I. Chiorescu, P. Pannetier, C. Paulsen, C. Thirion, and V. Villar.
# Unitarized pion-nucleon scattering within Heavy Baryon Chiral Perturbation Theory ### A The IAM applied to $`\pi `$-N scattering. Unitarization is not foreign to effective theories. In fact, Padé approximants with very simple models are enough to describe the main features of $`\pi `$-N scattering . Although a systematic application within an effective Lagrangian approach was called for, it was never carried out. Customarily, the data are presented in terms of partial waves of definite isospin $`I`$, orbital angular momentum $`L`$ and total angular momentum $`J`$, using the spectroscopic notation $`L_{2I+1,2J+1}`$ (with $`L=S,P`$, … waves). Generically, the HBChPT $`\pi `$-N partial waves, $`t`$, are obtained as a series in the momentum transfer and meson masses. Thus, basically, they are polynomials in the energy and mass variables (as well as logarithms from the loops, which provide the cuts and imaginary parts required by unitarity). Such an expansion will never satisfy the $`\pi `$-N elastic unitarity condition $$\text{Im}\,t=q_{cm}|t|^2\;\Leftrightarrow \;\text{Im}\,\frac{1}{t}=-q_{cm}\;\Leftrightarrow \;\frac{1}{t}=\text{Re}\,\frac{1}{t}-iq_{cm},$$ (1) with $`q_{cm}`$ the center of mass momentum of the incoming pion. But HBChPT satisfies unitarity perturbatively, i.e.: $`\text{Im}\,t_1=\text{Im}\,t_2=0;\text{Im}\,t_3=q_{cm}|t_1|^2,\mathrm{},`$ (2) where $`t_k`$ stands for the $`𝒪(q^k)`$ contribution to the amplitude. This is indeed the case in , but not in , where an additional redefinition of the nucleon field allows one to eliminate the $`(v)^2/2m`$ terms in the Lagrangian . We have performed an additional $`1/m`$ expansion of the results in in order to recover a pure expansion satisfying eq.(2). Thus, we count $`q_{cm}`$ and $`M`$ as $`𝒪(ϵ)`$, so that each partial wave reads $`t\simeq t_1+t_2+t_3+𝒪(ϵ^4)`$, where the subscript stands for the order in $`ϵ`$ of each contribution. We have then checked that eq.(2) is verified. However, from eq.(1) any unitary elastic amplitude has exactly the following form: $$t=\frac{1}{\text{Re}(1/t)-iq_{cm}},$$ (3) *for physical values of the energy, and below any inelastic threshold*. The problem, of course, is how to obtain Re(1/t). For instance, setting $`\text{Re}(1/t)=|q_{cm}|\mathrm{cot}\delta =-\frac{1}{a}+\frac{r_0}{2}q_{cm}^2`$ we recover the familiar effective range approximation, whereas by taking $`\text{Re}\,t\simeq t_1`$ we arrive at a Lippmann-Schwinger-like equation . A frequent criticism of unitarization is its apparent arbitrariness, although from eq.(3) we see that the difference between two unitarization methods is the way of approximating Re$`(1/t)`$. Since we want to restrict our Lagrangian to include just pions and nucleons preserving the $`SU(2)`$ chiral symmetry, the most general approximation to Re(1/t) is HBChPT. In particular, we will take the $`𝒪(q^3)`$ calculations, but the method can be easily generalized to higher orders. Thus, we arrive at $$t\simeq \frac{t_1^2}{t_1-t_2+t_2^2/t_1-\text{Re}\,t_3-iq_{cm}t_1^2},$$ (4) where we have only kept the relevant order in Re$`[(t_1+t_2+t_3)^{-1}]`$. This is the $`𝒪(q^3)`$ form of the IAM. Note that if we reexpand in powers of $`q`$, we recover at low energies the HBChPT result. However, as it is written, the amplitude explicitly satisfies elastic unitarity. Furthermore, using eq.(2), we can rewrite $`\text{Re}\,t_3+iq_{cm}t_1^2=t_3`$, which can be analytically continued to the complex plane (where, for instance, we will look for the pole associated with the $`\mathrm{\Delta }(1232)`$). Incidentally, eq.(4) thus rewritten is a Padé approximant of the $`𝒪(q^3)`$ series.
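To make the construction explicit, the fragment below assembles the $`𝒪(q^3)`$ IAM amplitude of eq.(4) from the three chiral contributions and checks elastic unitarity, eq.(1), numerically. The input numbers are toy values standing in for the actual HBChPT partial waves, which are far lengthier expressions.

```python
import numpy as np

def iam_amplitude(t1, t2, t3_re, q_cm):
    """O(q^3) Inverse Amplitude Method, eq.(4):
    t ~ t1^2 / (t1 - t2 + t2^2/t1 - Re t3 - i q_cm t1^2),
    with t1, t2, Re t3 the chiral contributions at a given energy."""
    denom = t1 - t2 + t2**2 / t1 - t3_re - 1j * q_cm * t1**2
    return t1**2 / denom

def check_elastic_unitarity(t, q_cm, tol=1e-10):
    """Elastic unitarity, eq.(1): Im t = q_cm |t|^2."""
    return abs(t.imag - q_cm * abs(t)**2) < tol

# Toy numbers (illustrative only): low-energy values of one partial wave
t1, t2, t3_re, q = 0.10, 0.03, 0.01, 0.8
t = iam_amplitude(t1, t2, t3_re, q)
print(t, check_elastic_unitarity(t, q))
```

Because the denominator is Re(1/t) expanded to $`𝒪(q^3)`$ minus the exact $`iq_{cm}`$ term, the output satisfies Im$`\,t=q_{cm}|t|^2`$ identically, whatever toy inputs are chosen.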
From the K-matrix point of view, we have identified $`K^{-1}=\text{Re}\,t^{-1}`$. Even though the elastic unitarity condition is only satisfied for real values of $`s`$ above threshold, the use of the IAM in the complex plane can be justified using dispersion relations , provided one is not very far from the physical cut. In other regions (around the left cut, for instance) the IAM would be inappropriate. As a consequence, it is also possible to reproduce the poles in the second Riemann sheet, which are close to the physical cut and are associated with resonances. As a matter of fact, the IAM has been successfully applied to meson-meson scattering . In particular, using the $`𝒪(p^4)`$ ChPT Lagrangian, the IAM generalized to coupled channels yields a remarkable description of all channels up to 1.2 GeV, including seven resonances . Using the Lippmann-Schwinger-like equation mentioned above it is also possible to describe S-wave kaon-nucleon scattering, using the lowest order Lagrangian , including the $`\mathrm{\Lambda }(1405)`$. Let us then use eq.(4) when $`t`$ are the $`L_{2I+1,2J+1}`$ $`\pi `$-N partial waves. The resulting amplitudes will be fitted to the phase shifts , which are actually an extrapolation and do not include the experimental errors. For the fit we have used the MINUIT Function Minimization and Error Analysis routine from the CERN program Library. As is customarily done in the literature, we will assign an error to the data in . For instance, in ref. the central points have been given a 3% uncertainty. However, since our fits will cover wide energy ranges, the use of a constant relative error would give more weight to the low energy data. Thus we have also added an additional systematic error of 1 degree. (A 5% error plus a $`\sqrt{2}`$ systematic error was used in .) This error is needed to use the minimization routine, and, although the order of magnitude may seem appropriate, the values are rather arbitrary, so that the meaning of the $`\chi ^2/d.o.f.`$ obtained from MINUIT has to be interpreted cautiously. Furthermore, the data near threshold are subject to many uncertainties, so that, also following , we will start our fits at $`\sqrt{s}=`$ 1130 MeV. Hence the threshold parameters are real predictions in our approach. In addition, we should limit the approach to energies where inelasticities can be neglected. In particular, we will not use our $`P_{11}`$, $`P_{13}`$ and $`P_{31}`$ phase shifts above the $`\pi \pi N`$ threshold ($`\sqrt{s}\simeq `$ 1220 MeV), since they are very small and inelasticities could be significant. The $`S_{31}`$ and $`S_{11}`$ phase shifts are larger and we fitted them up to $`\simeq `$ 1360 MeV. The inelasticities in $`P_{33}`$ are negligible up to 1400 MeV since this channel is dominated by the $`\mathrm{\Delta }(1232)`$, which is strongly coupled to $`\pi `$-N. #### a The IAM and Resonance Saturation. Following the suggestion that the $`O(p^2)`$ parameters can be understood from resonance saturation , it is natural to try an IAM fit constrained by this hypothesis. Thus, we first fix the $`a_i`$ to the values of , which are compatible with the saturation hypothesis. The resulting $`O(p^3)`$ parameters are given in Table I. In addition, we give in Table I the values for a second fit where we have allowed the $`a_i`$ parameters to vary within the ranges expected from resonance saturation. The results of fit 2 are plotted as a solid line in Figure 1.
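The error assignment described above (a relative error on the extrapolated phase shifts plus a constant systematic of one degree, added in quadrature) translates into a $`\chi ^2`$ of the form sketched below; the function and the data arrays are purely illustrative and are not the actual fitting code.

```python
import numpy as np

def chi2(delta_model, delta_data, rel_err=0.03, sys_err_deg=1.0):
    """Chi-square for the phase-shift fits: each extrapolated data point
    carries a relative error plus a constant systematic, in quadrature."""
    sigma = np.sqrt((rel_err * delta_data)**2 + sys_err_deg**2)
    return np.sum(((delta_model - delta_data) / sigma)**2)

# Illustrative use: compare model phase shifts (degrees) to 'data'
delta_data = np.array([2.0, 5.5, 11.0, 19.0, 30.0])
delta_model = np.array([2.1, 5.3, 11.4, 18.5, 30.8])
print(chi2(delta_model, delta_data) / len(delta_data))   # chi^2 per point
```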
We used “hatted” quantities, $`\widehat{b}_i`$, because their values do not necessarily correspond to those of HBChPT, since now they also absorb the IAM resummation effects and some high energy information. Only if there were very good convergence of the theory at low energies would the values of the $`b_i`$ be similar to the $`\widehat{b}_i`$ (as happens in ChPT). Therefore, at present, *our $`\widehat{b}_i`$ should not be used to calculate any other process at low energies*. Not surprisingly, there are strong correlations between parameters. Unfortunately, from MINUIT we cannot get the actual form of the correlation. However, looking only at linear combinations with integer coefficients, some of them, like $`\widehat{b}_1+\widehat{b}_2+\widehat{b}_3`$ or $`\widehat{b}_1+\widehat{b}_2+2\widehat{b}_3+\widehat{b}_{16}-\widehat{b}_{15}`$, remain within natural sizes for these fits. Nevertheless, as we have commented, there is considerable uncertainty in the precise values of the $`b_i`$ at present (see Table 2 in ). In summary, the main conclusion from Figure 1 is that it is possible to obtain an improved description of $`\pi N`$ scattering including the $`\mathrm{\Delta }(1232)`$, with the $`a_i`$ values obtained from resonance saturation. Note, however, that the $`S_{31}`$ phase shift does not show any real improvement. For illustration we also give in Figure 1 the extrapolation of the $`O(q^3)`$ HBChPT results to high energies (dotted line) as well as the IAM result (dashed line), using the $`a_i`$ and $`b_i`$ values in . #### b Unconstrained IAM fits. Of course, we can get much better fits (all with $`\chi ^2/d.o.f.<1`$) by leaving all the parameters free. For illustration, see the dash-dotted line in Figure 1. There are again strong correlations, and the actual value of each one of the $`\widehat{a}_i`$ and $`\widehat{b}_i`$ can be extremely unnatural. The correlations are now even more complicated due to the quadratic $`t_2^2`$ term in the denominator of eq.(4). By inspection of the analytic formulas, we find that the $`\widehat{a}_1+\widehat{a}_2`$, $`\widehat{a}_5-4\widehat{a}_1`$, and $`2(\widehat{b}_1+\widehat{b}_2)+(\widehat{b}_{16}-\widehat{b}_{15})`$ combinations are the most relevant, and they remain rather stable for these fits. However, *it is not possible to obtain a meaningful determination of each individual parameter* without some additional assumption (like resonance saturation). That is again due to the large number of parameters, but also to the slow HBChPT convergence. #### c The $`\mathrm{\Delta }(1232)`$ resonance. The IAM dynamically generates a pole in the second Riemann sheet at $`\sqrt{s}=(1212-i\mathrm{\hspace{0.17em}47})\text{MeV}`$, which is rather stable within all the fits and in very good agreement with the data . #### d Threshold parameters. For definitions and notation we refer again to . Our results are shown in Table II, where we have also listed the experimental values, extracted from . As pointed out in and , the errors for those values are clearly underestimated. Hence, it should be borne in mind that the threshold parameters are not as well determined as it may seem from those errors (see also ). Our fits give reasonably good agreement with experiment for the $`S`$-wave scattering lengths. For most $`P`$-waves, we agree in order of magnitude and sign. Our results are also in rough agreement with , where they give: -0.07 GeV<sup>-1</sup> $`\le a_0^+\le `$ 0.04 GeV<sup>-1</sup> and 0.6 GeV<sup>-1</sup> $`\le a_0^{-}\le `$ 0.67 GeV<sup>-1</sup>.
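Poles such as the $`\mathrm{\Delta }(1232)`$ quoted above are located by continuing the amplitude to complex energies and searching for a zero of $`1/t`$ on the second Riemann sheet. A generic Newton-iteration sketch is given below; the function `inv_t` is a placeholder for the analytically continued IAM denominator, not the actual amplitude.

```python
def find_pole(inv_t, s0, steps=50, h=1e-6):
    """Newton iteration for a zero of 1/t(s) in the complex s plane
    (i.e. a pole of t), starting from the guess s0."""
    s = complex(s0)
    for _ in range(steps):
        d = (inv_t(s + h) - inv_t(s - h)) / (2 * h)   # numerical derivative
        s -= inv_t(s) / d
    return s

# Toy example: a denominator with a pole at sqrt(s) = (1212 - 47j) MeV
sp = (1212 - 47j)**2
print(find_pole(lambda s: s - sp, 1.2e6 + 0j)**0.5)
```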
### B Conclusions and discussion We have unitarized the HBChPT $`𝒪(q^3)`$ $`\pi `$N elastic scattering amplitude with the Inverse Amplitude Method. This approach is able to describe the phase shifts up to the inelastic thresholds and, in addition, it gives the correct pole for the $`\mathrm{\Delta }(1232)`$ in the $`P_{33}`$ channel. Our fits use the extrapolated phase shifts between $`\sqrt{s}=`$ 1130 MeV and the corresponding inelastic thresholds. Within this approach, we can predict the threshold values, which for the $`S`$ waves are in good agreement with experiment and with recent determinations. Unfortunately, since there are large correlations between some parameters, it is possible to obtain good fits with very different sets of parameters, which can have rather unphysical values. This is due to several reasons: a) The slow convergence of the series, since contributions from different orders are comparable in almost every partial wave; the effect of higher order terms, which was less relevant at threshold, is absorbed in our case in the values of the chiral coefficients. b) The strong correlations between the parameters, since the fits are only sensitive to certain combinations; hence the values of each individual coefficient are meaningless. The most relevant conclusion of this study is that we can still reproduce the $`\mathrm{\Delta }(1232)`$ with the $`a_i`$ values expected from the resonance saturation hypothesis, while keeping a reasonably good description of the other channels. Finally, we would like to remark that the method developed here can be easily extended to the case of $`SU(3)`$ symmetry as well as to the coupled channel formalism. Further work along these lines is in progress. ## Acknowledgments Work partially supported by DGICYT under contracts AEN97-1693 and PB98-0782. J.R.P. thanks J.A. Oller and E. Oset for useful discussions.
# The Metallicity Dependence of RR Lyrae Absolute Magnitudes from Synthetic Horizontal-Branch Models ## 1 Introduction It has long been realized that the determination of the absolute ages of globular star clusters in our Galaxy is most vulnerable to the uncertainty in their distances (see e.g. Renzini 1991). Because of the difficulty in determining the precise position of the unevolved main sequence in globular clusters, the RR Lyrae variables have in recent years been the preferred distance indicators for globular clusters, and have been used in the calibration of the $`\mathrm{\Delta }`$V method for comparing observed CMDs to theoretical isochrones (Renzini 1991; Buonanno et al. 1994). In this method, the quantity $`\mathrm{\Delta }`$V is defined as the difference in V magnitude between the bluest point of the main sequence turnoff and the HB at the same color in the cluster CMD. This approach has the advantage of being independent of distance and interstellar reddening. The quantity $`\mathrm{\Delta }`$V can in principle be calibrated with theoretical isochrones, since it relies primarily on the physics of the deep interior, which is believed to be relatively well known, and less on uncertain assumptions about the efficiency of convection in the envelope and on color transformations (see e.g. the discussion of model uncertainties by Chaboyer et al. 1996a). The purpose of this paper is to present a new theoretical calibration of the HB luminosity based on updated stellar evolutionary sequences and synthetic HB (SHB) models, and to analyze the sensitivity of $`M_V(RR)`$ to \[Fe/H\]. Other parameters, such as HB morphology and the input parameters used in the construction of SHBs, also affect the relation. The slope of the $`<M_V(RR)>`$-\[Fe/H\] relation is needed to discuss the relative ages of globular clusters and to determine whether there exists an age-metallicity correlation in the Galactic halo (Zinn 1985, 1993; Chaboyer, Demarque & Sarajedini 1996). Both the slope and zero-point (and their uncertainties) are also essential in discussions of the absolute ages of the globular clusters (Chaboyer et al. 1996, 1998; Demarque 1997). With the first parallaxes for field subdwarfs from the Hipparcos satellite now becoming available, there has been a flurry of new interest in using the main sequence to derive globular cluster distances independently of the RR Lyrae luminosity calibration (Reid 1997; Gratton et al. 1997; Salaris, Degl’Innocenti & Weiss 1997). In early studies, the distance modulus of a globular cluster was derived by fitting the cluster main sequence to a main sequence derived from field subdwarfs with the same metallicity as the cluster (e.g. see the review by Sandage 1986). But because the uncertainties in trigonometric parallaxes for field subdwarfs were large, and the photometry of globular cluster main sequence stars was very uncertain, this approach had generally been abandoned in recent years. Preliminary analyses of the Hipparcos data indicate that the subdwarfs are intrinsically more luminous than previously believed, and therefore that globular cluster distances have been underestimated in the past. If correct, this means a more luminous HB and a higher luminosity for the RR Lyrae variables. In view of the crucial importance of globular cluster distances in understanding the evolution of galaxies and cosmology, new theoretical SHB models have been constructed.
## 2 Evolutionary Tracks The evolutionary sequences presented here are quite similar to the many other HB models in the recent literature (Sweigart 1987; Lee & Demarque 1990; Dorman, Lee & VandenBerg 1991; Dorman 1992; Yi, Lee & Demarque 1993; Castellani et al. 1994; Caputo & Degl’Innocenti 1995; Mazzitelli, D’Antona & Caloi 1995), except for recent updates in the opacities and equation of state (Rogers 1986; Iglesias & Rogers 1996; Rogers, Swenson & Iglesias 1996). The evolutionary sequences are adopted from Yi, Demarque & Kim (1997) but extended to finer grids of metallicity and mass, i.e. in the range 0.52 – $`0.92M_{}`$ at $`0.02M_{}`$ intervals, for the following metallicities: Z = 0.0001, 0.0002, 0.0004, 0.0007, 0.001, 0.002, and 0.004, corresponding to \[Fe/H\] between $`-2.3`$ and $`-0.7`$. The helium content by mass $`Y`$ was taken from the evolutionary tracks that were used for the new Yale Isochrones<sup>1</sup><sup>1</sup>1See Yi et al. at http://www.shemesh.gsfc.nasa.gov/ for both the HB tracks and the isochrones used in this study., corresponding to an initial $`Y=0.23`$. The main difference in the input physics with respect to the Lee & Demarque (1990) models is the introduction of the OPAL opacities and equation of state (Iglesias & Rogers 1996; Rogers, Swenson & Iglesias 1996). As a result, for $`Y_{MS}=0.23`$, the current HB models of Yi et al. (1997) used in this study are approximately 0.05 – 0.1 mag. fainter than those of Lee & Demarque (1990). The Yi et al. models are approximately 0.02 – 0.05 mag. fainter than those of Caloi et al. (1997) and Cassisi et al. (1998) on the zero-age HB (ZAHB) in the instability strip at \[Fe/H\] $`=-2`$. This difference seems to be caused mostly by the fact that the helium core masses of the Yi et al. models are smaller, by 2 – 5%, than the core masses of the Caloi et al. and Cassisi et al. models. Table 1 lists the helium abundances in the envelope and the helium core masses for given metallicities used in the Yi et al. HB models. These values are obtained from stellar models at the onset of helium ignition at the tip of the giant branch, which included the effects of the OPAL opacities and the same equation of state as in Guenther et al. (1992). ## 3 Synthetic Horizontal-Branch (SHB) Models The SHB models were derived using the technique introduced by Rood (1973), and extended by Lee et al. (1990). The mass distribution on the HB is defined by the following truncated Gaussian distribution: $$\mathrm{\Psi }(M)=\mathrm{\Psi }_0[M-(\overline{M_{HB}}-\mathrm{\Delta }M)](M_{RG}-M)\mathrm{exp}\left[-\frac{(\overline{M_{HB}}-M)^2}{2\sigma ^2}\right]$$ (1) where $`\sigma `$ is a mass dispersion factor in solar masses, $`\mathrm{\Psi }_0`$ is a normalization factor, and $`\overline{M_{HB}}`$ ($`=M_{RG}-\mathrm{\Delta }M`$) is the mean mass of the HB stars. Three values of $`\sigma `$ (0.01, 0.02 and 0.03 solar masses) have been chosen, where the preferred value $`\sigma `$ = 0.02 is the mean of a representative group of clusters (see Table 1 in Lee 1990), and 0.01 and 0.03 were added to illustrate the sensitivity of the SHB models to the choice of $`\sigma `$. When considering the properties of stars in the RR Lyrae instability strip, it is convenient to introduce the HB Type index, which is defined as the ratio (B-R)/(B+V+R), where B, V, and R are the numbers of blue HB stars, RR Lyrae variables, and red HB stars, respectively. This parameter is convenient for classifying HB morphology; we note that (B-R)/(B+V+R) ranges from $`-1`$, for clusters that display only a red HB (e.g.
47 Tuc), to $`+1`$, for clusters containing only blue HB stars (e.g. NGC 6752). Clusters that contain nearly equal numbers of red and blue HB stars (e.g. M3) are assigned values of (B-R)/(B+V+R) near 0. One of the advantages of this morphology index is that it includes the RR Lyrae variables, thus distinguishing between two clusters with the same numbers of blue or red HB stars, but differing in their RR Lyrae populations. Figure 1 illustrates the dependence of the mean RR Lyrae visual absolute magnitude $`<M_V(RR)>`$ on metallicity and on HB Type, based on the SHB models constructed with $`\sigma =0.02`$. The colors and bolometric corrections were taken from Green et al. (1987). As discussed earlier, the use of improved physics reduces the luminosities of the SHBs for a given helium abundance. At \[Fe/H\] = $`-1.9`$, and for HB type 0, the models yield $`M_V(RR)`$ = 0.47 $`\pm 0.10`$. This result is consistent with Walker’s (1992) value for the LMC RR Lyrae variables, which is based on the classical Cepheid distance scale, and with the distance of SN1987A (Panagia et al. 1991; Gould 1995; Sonneborn 1997). Using observational data, Fusi Pecci et al. (1992) concluded that (B-R)/(B+V+R) depends primarily on the location of the peak of the color distribution of the HB, and only slightly on the dispersion in color of the HB stars. This conclusion is confirmed by the theoretical SHB models. For a fixed metallicity, the run of $`M_V(RR)`$ predicted by the SHB models as a function of HB Type is found to depend weakly on the choice of the mass dispersion $`\sigma `$ in eq. (1). This is illustrated in Figure 2, where RR Lyrae magnitudes for two extreme values of $`\sigma `$, 0.01 and 0.03, are plotted. The peculiar behavior near HB type 0.9 is caused by the particular metallicity dependence of the vertical width of the HB tracks. This vertical width is narrower for Z = 0.0002 than for Z = 0.0004. Thus, although the ZAHB luminosity is brighter for Z = 0.0002, the evolved RR Lyrae variables for Z = 0.0004, which are near the end of their HB tracks, can be brighter than those for Z = 0.0002. Increasing $`\sigma `$ will dilute this effect. As a result, we do not see this behavior at larger $`\sigma `$ in Figure 2. ## 4. Is there a Universal Slope to the $`<M_V(RR)>`$-\[Fe/H\] Relation? The dependence of $`<M_V(RR)>`$ on \[Fe/H\] is needed to derive the relative ages of globular clusters of different metallicities. It is critical in deriving the chronology of the Galactic halo and its chemical enrichment, and in particular the possible existence of an age-metallicity correlation in the halo. It is customary to assume a linear relation between $`M_V(RR)`$ and \[Fe/H\], i.e. to write: $$M_V(RR)=\mu [Fe/H]+\gamma $$ (2) where, when used for globular cluster dating, the slope $`\mu `$ affects the relative ages of clusters of different metallicities, and both $`\mu `$ and $`\gamma `$ determine the absolute ages. There has been much debate about the value of the slope $`\mu `$ over the years. Recently, from an analysis of the Oosterhoff-Sawyer period shift effect in globular clusters, Sandage (1993) derived a “steep” $`\mu =0.30\pm 0.12`$, while studies based on the Baade-Wesselink method of determining the absolute magnitude of variable stars have yielded a “shallow” slope $`\mu `$, in the vicinity of 0.20 or less; Jones et al. (1992) derived $`\mu =0.16\pm 0.03`$ and Skillen et al. (1993) determined $`\mu =0.21\pm 0.05`$. Recent analyses of these data by Sarajedini et al. (1997) and by Fernley et al.
(1998) yielded $`\mu =0.22\pm 0.05`$ and $`0.20\pm 0.04`$, respectively. HST observations of the HB luminosity of three clusters in M31 (by Ajhar et al. 1996) have yielded a very shallow value of $`\mu `$ = 0.08 $`\pm 0.13`$. Also using HST observations of the CMDs of eight globular clusters in M31, Fusi Pecci et al. (1996) derived $`<M_V>`$ = (0.13 $`\pm 0.07`$)\[Fe/H\] + (0.95$`\pm 0.09`$) for the mean magnitude of the HB in the instability strip. Theoretical estimates have consistently yielded $`\mu `$ values in the range 0.18-0.20 for models near the ZAHB (Lee et al. 1990; Salaris et al. 1997). Discussions of the theoretical $`<M_V(RR)>`$-\[Fe/H\] relation are frequently based on ZAHB models (e.g., Caloi et al. 1997). Sometimes an evolutionary correction is applied to take into account the fact that RR Lyrae variables are not observed in their original ZAHB position, having evolved both in color and in magnitude (Carney et al. 1992). Synthetic HB models are needed to provide a realistic description of these evolutionary corrections, which are found to differ significantly depending on the mass and chemical composition of the models. Furthermore, only with SHB models is it possible to evaluate the effect of HB morphology on the value of $`<M_V(RR)>`$ at a given metallicity, as was done originally by Lee (1991). In his study of the RR Lyrae luminosities in $`\omega `$ Cen, Lee (1991) pointed out that $`M_V(RR)`$ is not a unique function of metallicity, particularly at the lowest metallicities. The effect is particularly marked for clusters with very blue stars on the HB, corresponding to HB Types approaching +1. Figure 1 updates the original Lee calibration. It is clear from Figure 1 that there is nothing universal about the value of $`\mu `$, and great caution should be used when applying equation (2) to derive RR Lyrae magnitudes without taking into account the HB morphology type of the population to which they belong (see also Caputo 1997). To examine this point in more detail, let us first consider the hypothetical case where the HB is evenly populated with stars over a wide range in \[Fe/H\]. Since there is very little variation in $`<M_V(RR)>`$ over the range in HB type -0.5 to +0.5 (see Figs. 1 and 2), we may use the calculations for HB type = 0.0 to approximate this case. The resulting relationship between $`<M_V(RR)>`$ and \[Fe/H\] is shown in Figure 3, where one can see that there are small variations in slope over narrow ranges in \[Fe/H\] (see also Caputo 1997). The slopes found from our calculations are listed in Table 2. Since most observational studies have considered variables that span a wide range in \[Fe/H\], e.g., -2.2 to -0.5, it is the slope over a wide range that is of most interest. While the relationship given by our calculations is non-linear, the departures from a straight-line fit are small (see Figure 3) and would be very hard to detect observationally. The slope of the line in Figure 3 (0.21) is similar to that given by ZAHB calculations, but the zero-point (0.89) is somewhat brighter because the RR Lyrae variables have evolved from the ZAHB. It is expected that this $`<M_V(RR)>`$-\[Fe/H\] relationship will not apply to all stellar populations because HB morphology changes with \[Fe/H\], the so-called first parameter, and also varies at constant \[Fe/H\], the second parameter effect.
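The straight-line fit quoted above can be reproduced schematically with an ordinary least-squares fit of equation (2) to the HB type = 0 sequence; the grid values below are illustrative numbers of the right order, not the actual model tabulation.

```python
import numpy as np

# Illustrative (M_V(RR), [Fe/H]) pairs for HB type = 0; not the actual model grid
feh = np.array([-2.3, -2.0, -1.7, -1.4, -1.1, -0.8])
mv_rr = np.array([0.41, 0.47, 0.53, 0.60, 0.66, 0.73])

# Least-squares fit of M_V(RR) = mu*[Fe/H] + gamma, equation (2)
mu, gamma = np.polyfit(feh, mv_rr, 1)
residuals = mv_rr - (mu * feh + gamma)
print(f"mu = {mu:.2f}, gamma = {gamma:.2f}, max |residual| = {abs(residuals).max():.3f}")
```

With grid values like these, the fit returns $`\mu `$ = 0.21 and $`\gamma `$ = 0.90, close to the numbers quoted from Figure 3, with residuals of only a few thousandths of a magnitude. As stressed above, however, HB morphology changes with \[Fe/H\] and also varies at constant \[Fe/H\].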
This is most easily illustrated by considering the relationships expected for the HB morphologies of the globular clusters lying in different radial zones in the Milky Way. Previous investigations (e.g., Searle & Zinn 1978, Lee et al. 1994) have shown that the globular clusters of the inner halo and bulge exhibit a tight relationship between HB morphology and \[Fe/H\], which may mean that the second parameter effect is absent or weak among these clusters. The top graph in Figure 4 shows this relationship for the globular clusters that have galactocentric distances $`R_{gc}\le 6`$ kpc (data from the 1999 June 22 revision of the Harris 1996 catalogue). To derive the $`<M_V(RR)>`$-\[Fe/H\] relationship for this group of clusters, we estimated the mean HB type of the clusters at the metallicities of our calculations and then derived $`M_V(RR)`$ from the synthetic HB for that HB type and \[Fe/H\]. When estimating the mean HB type, lower weight was given to the clusters with HB type $`\sim 1.0`$, because these very blue HB clusters contain few, if any, RR Lyrae variables. To obtain additional points, we interpolated in \[Fe/H\] midway between the values of our calculations. Figure 4 shows the mean HB types (x’s) used in this procedure, and the resulting values of $`<M_V(RR)>`$ are plotted against \[Fe/H\] in the top diagram of Figure 5. Because the globular clusters with \[Fe/H\] $`<-1.5`$ have exclusively very blue HB types, in this \[Fe/H\] range $`M_V(RR)`$ is significantly brighter than in the HB type = 0 case (the dashed line). As Figure 5 illustrates, this produces a very steep ($`\mu =0.36`$) $`<M_V(RR)>`$-\[Fe/H\] relationship. There is not a tight relationship between \[Fe/H\] and HB morphology among the globular clusters in the outer halo because of the second parameter effect. To obtain a sufficiently large sample of clusters to illustrate this, it is necessary to consider a wide range in $`R_{gc}`$, because the number density of clusters falls off steeply with increasing $`R_{gc}`$. The clusters having $`6<R_{gc}\le 20`$ kpc are plotted in the lower diagram of Fig. 4, where the diversity in HB types among the metal-poor clusters is obviously much larger than in the inner halo. Following the same procedures as before, we have estimated the mean HB types of the clusters at different values of \[Fe/H\] and have estimated values of $`M_V(RR)`$ from our synthetic HB calculations. The resulting $`M_V(RR)`$-\[Fe/H\] relationship is shown in the lower diagram of Fig. 5. In contrast with the relationship for the inner halo, this one ($`\mu =0.22`$, $`\gamma =0.90`$) deviates only slightly from the case where HB type = 0 for all \[Fe/H\] (see Fig. 3). On the basis of Figs. 4 and 5, we conclude that a universal $`<M_V(RR)>`$-\[Fe/H\] relation does not exist, because $`<M_V(RR)>`$ depends on HB morphology as well as \[Fe/H\], and because the relationship between HB morphology and \[Fe/H\] varies with the stellar population being considered. While this last point is best illustrated by the globular clusters in the inner and outer halo (Fig. 4), the recent work on the color-magnitude diagrams of the globular clusters in M31 (Fusi Pecci et al. 1996), M33 (Sarajedini et al. 1998), the LMC (Olsen et al. 1998), and the Fornax dwarf spheroidal galaxy (Buonanno et al. 1998) also shows significant variations in the HB type - \[Fe/H\] relation from system to system.
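The procedure just described - reading a mean HB type off the cluster distribution at each metallicity and then looking up $`<M_V(RR)>`$ on the synthetic-HB grid - amounts to a simple two-dimensional interpolation. A sketch follows, with placeholder grid values (the actual grid comes from the SHB calculations behind Figure 1).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder synthetic-HB grid: M_V(RR) tabulated vs [Fe/H] and HB type
feh_grid = np.array([-2.3, -1.9, -1.5, -1.1, -0.7])
hbtype_grid = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
mv_grid = np.array([[0.48, 0.45, 0.42, 0.40, 0.33],
                    [0.55, 0.53, 0.51, 0.49, 0.42],
                    [0.62, 0.60, 0.59, 0.57, 0.52],
                    [0.69, 0.68, 0.67, 0.66, 0.62],
                    [0.77, 0.76, 0.76, 0.75, 0.73]])  # rows: [Fe/H]; cols: HB type

lookup = RegularGridInterpolator((feh_grid, hbtype_grid), mv_grid)

# Mean HB types estimated from a cluster sample at each [Fe/H] (illustrative)
for feh, hb in [(-2.1, 0.95), (-1.7, 0.9), (-1.3, 0.2), (-0.9, -0.8)]:
    print(feh, hb, lookup([(feh, hb)])[0])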
The data on these external cluster systems - M31, M33, the LMC, and Fornax - which are far from complete for some of them, suggest that the inner Galactic halo may be the most extreme example, where over a wide range of \[Fe/H\] only very blue HB types are found. Therefore, stellar populations in which the $`<M_V(RR)>`$-\[Fe/H\] relation is as steep as the one for the inner halo clusters may be rare. Nonetheless, one should be cautious when adopting a $`<M_V(RR)>`$-\[Fe/H\] relation without information on the variation of HB type with \[Fe/H\] in the stellar population. It is important to consider whether the debate over the slope and zero-point of the $`<M_V(RR)>`$-\[Fe/H\] relation is partially due to the non-universality of this relation. The relation found here for the inner halo clusters resembles in slope the steep relationships that Sandage (1993 and references therein) obtained during his more than decade-long analyses of the Oosterhoff effect among the galactic globular clusters. We doubt, however, that they are related. Most of the clusters Sandage used in his analysis are rich in RR Lyrae variables and do not have extremely blue HBs. Our models predict that the $`<M_V(RR)>`$-\[Fe/H\] relation for these clusters is not steeply sloped. The $`<M_V(RR)>`$-\[Fe/H\] relation that Fusi Pecci et al. (1996) obtained from the color-magnitude diagrams of 8 globular clusters in M31 represents the other extreme, for they obtained the very shallow slope of $`\mu =0.13\pm 0.07`$. In only the 3 most metal-poor clusters of this sample is the HB morphology sufficiently blue to populate the instability strip with RR Lyrae variables, and these clusters do not have very blue HB types in spite of \[Fe/H\] $`\approx -1.8`$ to $`-1.5`$ (compare with the top diagram of Fig. 4). The other 5 clusters in this M31 sample have HB morphologies too red for RR Lyrae variables, and for them Fusi Pecci et al. had to resort to estimating the HB level at the instability strip from the observed red HB. Fusi Pecci et al. point out that their conclusion that $`\mu `$ is small does not depend critically on this uncertain procedure. For this sample of M31 clusters, our calculations predict a $`<M_V(RR)>`$-\[Fe/H\] relation similar to the HB type = 0 case; hence $`\mu \approx 0.21`$. While this value is larger than the value obtained by Fusi Pecci et al. (1996), it is barely within one standard deviation of it. As Fusi Pecci et al. suggest, a larger sample of M31 clusters must be observed before one can be confident that the slope of the M31 relation is indeed incompatible with theoretical calculations such as ours. The zero-point $`\gamma `$ of the M31 relation is fixed entirely by the distance adopted for M31, and the value obtained by Fusi Pecci et al. (1996) (0.95$`\pm 0.09`$) is consistent with our calculations. Of course, the $`<M_V(RR)>`$-\[Fe/H\] relation has also been investigated using samples of field RR Lyrae variables lying within a few kiloparsecs of the Sun. Some of these variables are members of the galactic halo, while others, preferentially the more metal-rich ones, are members of the thick disk population. Because it is difficult to recognize red HB stars in the field, the HB type - \[Fe/H\] relations of these populations are poorly known. The work by Preston, Shectman and Beers (1991) on the field HB stars and by Caputo (1993) on the RR Lyrae variables suggests that the HB morphology of the field may vary with $`R_{gc}`$ in much the same way as the HB morphology of the globular clusters.
Since a few metal-poor clusters with HB type $`<0.9`$ have values of $`R_{gc}`$ that are not much different from the Sun’s, we suspect the HB type - \[Fe/H\] relation of the field population near the Sun may resemble the lower diagram of Fig. 4 more than the upper. If this is correct, our models suggest that $`\mu \approx 0.22`$. This is close to the results obtained from applying the Baade-Wesselink method to samples of field RR Lyrae variables (see above). The values of $`\gamma `$ from the Baade-Wesselink analyses, which are considered more susceptible to systematic error than $`\mu `$, are about 0.1 mag. fainter than the value given by our calculations (see Fernley et al. 1998). In principle, the application of the method of statistical parallax to samples of field RR Lyrae variables should also yield the $`<M_V(RR)>`$-\[Fe/H\] relation. To date the samples of stars have proved inadequate for determining both $`\mu `$ and $`\gamma `$, but precise values of $`<M_V(RR)>`$ have been obtained at the mean \[Fe/H\] of the sample of halo RR Lyrae variables. Recent results by Layden et al. (1996) and by Gould & Popowski (1998) have yielded $`M_V(RR)`$ of $`+0.71\pm 0.12`$ and $`+0.77\pm 0.13`$ at \[Fe/H\] $`\approx -1.6`$, respectively. Our calculations give brighter values, with the exact value depending on which HB type - \[Fe/H\] relation is adopted. For the outer halo one, which may be appropriate for the variables near the Sun, $`M_V(RR)=+0.55`$ at \[Fe/H\] $`=-1.6`$, which is slightly more than one sigma brighter than the results from statistical parallax. We have no explanation for this difference, which, if confirmed, may mean that some revision of the models is required. While the dependence of $`M_V(RR)`$ on HB morphology may not have a large effect on either the Baade-Wesselink or the statistical parallax analyses, it is probably a contributing factor to the scatter in $`M_V(RR)`$ among the metal-poor variables (see Jones et al. 1992; Fernley et al. 1998). ## 5 Discussion The dependence of $`<M_V(RR)>`$ on HB morphology may have a significant effect upon astrophysical problems involving the distance scale for old stellar populations. We concentrate here on the question of the ages of globular clusters, which is important both for galactic evolution and for setting the minimum age of the universe. Our results suggest that assuming the same $`<M_V(RR)>`$-\[Fe/H\] relation for clusters of all HB types will significantly underestimate the distances to the metal-poor clusters having very blue HB morphologies. The luminosities of their main-sequence turnoffs will be underestimated and their ages will be overestimated. This effect is greatest for the clusters having the very bluest HB types ($`\sim 1.0`$) (see also Storm et al. 1994; Clement et al. 1999). To illustrate this, let us consider the globular cluster M92 (NGC 6341), which is one of the most metal-poor globular clusters (\[Fe/H\]=-2.24, Zinn & West 1984) and perhaps one of the oldest. While M92 has a blue HB, it is not extremely blue (HB type = 0.88, Lee et al. 1994), and one might think that it is immune to this systematic error. According to our models, for this HB type $`<M_V(RR)>`$ is approximately 0.07 mag. brighter than in the HB type = 0 case. Therefore, if the distance modulus of M92 is set using a $`<M_V(RR)>`$-\[Fe/H\] relation that is appropriate for HB types in the range -0.5 to +0.5, then the luminosity of its main-sequence turnoff will be underestimated by about 0.07 mag. This will cause its age to be overestimated by about 1 Gyr.
While this $`\sim `$ 10% error in cluster age may appear small, it is nonetheless significant for either setting the lower limit on the age of the universe or ascertaining the dispersion in age among the globular clusters and thereby distinguishing between different scenarios for the formation of the galactic halo. The theoretical prediction (see also Lee et al. 1990, Caputo 1997, and refs. therein) that the RR Lyrae variables in M92 are highly evolved HB stars is consistent with the observational results of Storm et al. (1994), who measured the absolute magnitudes of two variables in M92 using the Baade-Wesselink technique. The mean value of $`<M_V(RR)>`$ that they obtained for these two stars is brighter than the $`<M_V(RR)>`$-\[Fe/H\] relation they derived for field RR Lyrae variables by the same technique by 0.21$`\pm `$0.15 mag. While we predict a smaller offset, our result is within the errors of the value obtained from this very small sample. Often observers will not use the few RR Lyrae variables in a blue HB cluster to set the apparent magnitude of the HB, but will instead use the reddest non-variable stars on the blue HB. SHB models (see Figs. 1, 2, and 3 in Lee et al. 1990) indicate that these stars have also evolved far from the ZAHB, but somewhat less so than the RR Lyrae variables. At best, this practice will only partially remove the need to take into account HB morphology when estimating the distance modulus of a cluster. Care must also be exercised when turning this problem around and using a globular cluster of known distance to set the luminosity of RR Lyrae variables. For example, the globular cluster NGC 6752 has an extremely blue HB and lacks RR Lyrae variables. Its distance modulus has been measured from white dwarf fitting (Renzini et al. 1996). The absolute magnitude of the blue HB stars in NGC 6752 should not be assigned without correction to RR Lyrae variables of its metallicity. Finally, we note that any detailed comparison between theory and observation is still hampered by empirical uncertainties in cluster distances as well as in individual cluster metallicities. Partial support for this work was provided by NASA grant NAG5-8406 (P.D.), NSF grant AST-9319229 (R.Z.), and the Creative Research Initiative program of the Korean Ministry of Science & Technology (Y.-W.L. and S.Y.).
# Baryon Distribution in Galaxy Clusters as a Result of Sedimentation of Helium Nuclei ## 1. Introduction An accurate determination of the total gravitating mass of galaxy clusters is crucial for the ‘direct’ measurements of the cosmic mass density parameter $`\mathrm{\Omega }_M`$ through the mass-to-light technique (e.g. Bahcall, Lubin & Dorman 1995) and the baryon fraction method (e.g. White et al. 1993). The latter has received much attention in recent years because of the rapid progress in X-ray astronomy and particularly the large spatial extension of the hot and diffuse X-ray emitting gas in clusters. In the conventional treatment, the volume-averaged baryon fraction $`f_b`$, defined as the ratio of the gas mass $`M_{\mathrm{gas}}`$ to the total mass $`M_{\mathrm{tot}}`$ of a cluster, is obtained by assuming thermal bremsstrahlung emission and hydrostatic equilibrium for the intracluster gas. While such an exercise has been done for almost every X-ray selected cluster with known temperature, the resultant baryon fractions show a rather large dispersion among different clusters. In particular, $`f_b`$ appears to be a monotonically increasing function of cluster radius, and thereby cannot, in principle, be used for the determination of the cosmic density parameter $`\mathrm{\Omega }_M`$, since the asymptotic form of $`f_b`$ at large radii does not approach a universal value (White & Fabian 1995; David et al. 1995; Markevitch & Vikhlinin 1997a,b; White et al. 1997; Ettori & Fabian 1999; Nevalainen et al. 1999; Markevitch et al. 1999; Wu & Xue 1999). This leads to the uncomfortable situation that a greater fraction of dark matter is distributed at small scales than at large scales. It is commonly believed that these puzzles have arisen from our poor understanding of the local dynamics inside clusters, such as cooling/non-cooling flows, substructures, mergers, etc. Yet, a satisfactory explanation has not been achieved. The conventional cluster mass estimate from X-ray measurements of intracluster gas assumes a constant mean molecular weight $`\mu `$. That is, all the chemical elements of the X-ray emitting gas obey exactly the same spatial distribution. Under this assumption, the total mass of the cluster $`M_{\mathrm{tot}}\propto \mu ^{-1}`$, while the mass in gas $`M_{\mathrm{gas}}\propto \mu `$. Consequently, the cluster baryon fraction $`f_b`$ ($`=M_{\mathrm{gas}}/M_{\mathrm{tot}}`$) depends sensitively on the value of $`\mu `$. It appears that our estimates of the total mass and baryon fraction of a cluster would be seriously affected if its chemical elements did not share the same spatial distribution. The Boltzmann distribution of particles in a gravitational field $`\varphi `$ follows $`n\propto \mathrm{exp}(-m\varphi /kT)`$, where $`n`$ and $`m`$ are the number density and the individual mass of the particles, respectively. The heavier a particle is, the slower its thermal motion will be. So, heavy ions in the intracluster gas will have a tendency to drift toward the cluster center. As a consequence of this sedimentation of heavy ions toward the central region, and given that the cluster has survived for a sufficiently long time in the Universe, $`\mu `$ will no longer be a constant in the cluster. If the intracluster gas is entirely composed of hydrogen and helium with their primordial abundances, the value of $`\mu `$ will be higher at the cluster center, while it asymptotically reaches 0.5 at large radii for a fully ionized hydrogen gas.
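As a toy illustration of this argument, the sketch below evaluates the Boltzmann factors of hydrogen and helium nuclei in an assumed isothermal potential well and the resulting local mean molecular weight. The potential depths and the central He/H number ratio are placeholders, and the electrons (which in reality partially support the ions through a diffusion-induced electric field) are ignored.

```python
import numpy as np

M_P = 1.6726e-24      # proton mass [g]
K_B = 1.3807e-16      # Boltzmann constant [erg/K]

def mu_profile(phi, T, f_alpha=0.08):
    """Local mean molecular weight of an H+He plasma whose nuclei follow
    n ~ exp(-m*phi/kT); phi is the potential relative to the center [cm^2/s^2],
    f_alpha the central He/H number ratio (placeholder value)."""
    n_p = np.exp(-M_P * phi / (K_B * T))
    n_a = f_alpha * np.exp(-4 * M_P * phi / (K_B * T))
    return (n_p + 4 * n_a) / (2 * n_p + 3 * n_a)

T = 1e8                                        # gas temperature [K]
phi = np.linspace(0, 5, 6) * K_B * T / M_P     # potential in units of kT/m_p
print(mu_profile(phi, T))                      # mu decreases outward toward 0.5
```

Even this crude estimate shows $`\mu `$ falling from its central value toward 0.5 a few $`kT/m_p`$ out in the potential, which is the trend developed quantitatively below.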
The study of chemically inhomogeneous element distributions in galaxy clusters was initiated by Fabian & Pringle (1977), who studied the sedimentation of iron nuclei. They found that the iron nuclei in the X-ray gas may settle into the cluster core within a Hubble time. Taking into account the collisions of iron nuclei with helium nuclei, Rephaeli (1978) argued that the sedimentation time is much longer than the Hubble time, so that the iron nuclei would not have enough time to sediment into the core. In this Letter we wish to focus on helium nuclei instead of iron nuclei. Since the drift velocity $`v_D\propto A/Z^2`$, where $`A`$ and $`Z`$ are the atomic weight and the charge of the ion, respectively, the helium nuclei will settle much faster than the iron nuclei, and thus the sedimentation may eventually take place. Indeed, Abramopoulos, Chana & Ku (1981) have calculated the equilibrium distribution of the elements in the Coma cluster, assuming an analytic King potential, and found that helium and other heavy elements are strongly concentrated toward the cluster core. In particular, Gilfanov & Sunyaev (1984) have demonstrated that the diffusion of elements in the X-ray gas may significantly enhance the deuterium, helium, and lithium abundances in the core regions of rich clusters of galaxies. For simplicity, we assume that the intracluster gas consists of hydrogen and helium, and then demonstrate their equilibrium distributions in the gravitational potential characterized by the universal density profile (Navarro, Frenk & White 1995, 1997; hereafter NFW). As will be shown, under the scenario of helium sedimentation the baryonic matter in clusters can be more centrally distributed than the dark matter, in contradiction to what is commonly believed. This may open a possibility to solve the puzzle of a baryon fraction that increases with cluster radius. ## 2. Sedimentation of helium nuclei in galaxy clusters Our working model is based on the assumption that the intracluster gas composed of hydrogen and helium is in hydrostatic equilibrium at a temperature $`T`$ with the underlying gravitational potential. This can be justified by a simple estimate of their relaxation time $`t_{\mathrm{eq}}`$ (Spitzer 1978) $$t_{\mathrm{eq}}=\frac{3m_pm_\alpha k^{3/2}}{8(2\pi )^{1/2}n_pZ_p^2Z_\alpha ^2e^4\mathrm{ln}\mathrm{\Lambda }}\left(\frac{T}{m_p}+\frac{T}{m_\alpha }\right)^{3/2}=6\times 10^6\mathrm{yr}\left(\frac{n_p}{10^{-3}\mathrm{cm}^{-3}}\right)^{-1}\left(\frac{T}{10^8\mathrm{K}}\right)^{3/2},$$ (1) where $`m_p`$, $`m_\alpha `$, $`Z_p`$, and $`Z_\alpha `$ are the masses and the charges of the proton and the helium nucleus, respectively, $`n_p`$ is the proton number density, and the Coulomb logarithm is taken to be $`\mathrm{ln}\mathrm{\Lambda }\simeq 40`$. Obviously, $`t_{\mathrm{eq}}`$ is much shorter than the present age of the Universe, and hence the protons and the helium nuclei are readily in hydrostatic equilibrium at the same temperature. According to the Boltzmann distribution of particles in a gravitational field, heavy particles tend to be more centrally distributed in galaxy clusters than light particles. Therefore, with respect to the protons, the helium nuclei will tend to drift toward the cluster center. An immediate question is: is the drift velocity sufficiently large for the helium nuclei to have settled into the cluster core within the Hubble time?
The drift velocity $`v_D`$ of the helium nuclei can be estimated through (Fabian & Pringle 1977) $$v_D=\frac{3A_\alpha m_p^{1/2}g(2kT)^{3/2}}{16\pi ^{1/2}Z_\alpha ^2e^4n_p\mathrm{ln}\mathrm{\Lambda }}=8.8\times 10^6\mathrm{cm}/\mathrm{s}\left(\frac{g}{3\times 10^{-8}\mathrm{cm}\mathrm{s}^{-2}}\right)\left(\frac{n_p}{10^{-3}\mathrm{cm}^{-3}}\right)^{-1}\left(\frac{T}{10^8\mathrm{K}}\right)^{3/2},$$ (2) where $`g`$ is the gravitational acceleration. Eq.(2) indicates that within the Hubble time the helium nuclei can drift a distance $$r_D=1.8\mathrm{Mpc}\left(\frac{g}{3\times 10^{-8}\mathrm{cm}\mathrm{s}^{-2}}\right)\left(\frac{n_p}{10^{-3}\mathrm{cm}^{-3}}\right)^{-1}\left(\frac{T}{10^8\mathrm{K}}\right)^{3/2}h_{50}^{-1},$$ (3) which is indeed comparable to cluster scales. Moreover, as $`r_D\propto gn_p^{-1}`$, the value of $`r_D`$ (and $`v_D`$) increases rapidly with cluster radius $`r`$. Therefore, Eq.(3) suggests that the majority of the helium nuclei have probably sedimented into the cluster core within the Hubble time. Note that, due to the requirement of electrical neutrality, electrons carrying the compensating charge will simultaneously sediment along with the helium nuclei. Here, we have not considered the effects of magnetic fields and subcluster mergers, which may somewhat retard the sedimentation of helium nuclei (Rephaeli 1978; Gilfanov & Sunyaev 1984). The sedimentation of helium nuclei in galaxy clusters will lead to a dramatic change of the baryonic matter distribution. Consequently, the determination of the gas and total mass of a cluster will be affected through the mean molecular weight $`\mu `$. Another significant effect is that a sharp peak in the X-ray emission concentrated in the cluster core is expected, due to the electron - helium nucleus radiation. This provides an alternative scenario for the ‘cooling flows’ seen in some clusters, for which a detailed investigation will be presented elsewhere. In the present Letter, we only focus on the dynamical effect. In the extreme case, the X-ray gas will be helium-dominated at the cluster center, while hydrogen-dominated at large radii. The mean molecular weight $`\mu `$, which is commonly used as a constant, will then be a decreasing function of the cluster radius. At the cluster center, $`\mu `$ reaches $`4/3`$, the value for a fully ionized helium gas, while at large radii $`\mu `$ approaches $`0.5`$, the value for a fully ionized hydrogen gas. The conventional mass estimate from X-ray measurements assumes a constant mean molecular weight of $`\mu =0.59`$. As the total dynamical mass $`M_{\mathrm{tot}}`$ of a cluster is uniquely determined by the intracluster gas at large radii, the difference between $`M_{\mathrm{tot}}^c`$ (the value from the conventional method with $`\mu =0.59`$) and $`M_{\mathrm{tot}}`$ as a result of helium sedimentation is simply $$M_{\mathrm{tot}}=\frac{0.59}{0.5}M_{\mathrm{tot}}^c=1.18M_{\mathrm{tot}}^c.$$ (4) This indicates that the conventional method using X-ray measurements of intracluster gas, together with a constant mean molecular weight, may have underestimated the total cluster mass by $`\sim 20\%`$, which in turn results in an overestimate of the total baryon fraction by the same percentage. While the effect of the helium concentration toward the cluster center can alter the gas distribution, the total mass in gas of the whole cluster remains unaffected because of mass conservation.
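The scalings in Eqs. (1)-(3) are compact enough to wrap in a small calculator; the sketch below simply re-evaluates the quoted normalizations for other parameter values (the defaults reproduce the numbers in the text).

```python
def t_eq_yr(n_p=1e-3, T=1e8):
    """Proton-alpha equipartition time, Eq. (1) scaling [yr]."""
    return 6e6 * (n_p / 1e-3)**-1 * (T / 1e8)**1.5

def v_drift_cm_s(g=3e-8, n_p=1e-3, T=1e8):
    """Helium drift velocity, Eq. (2) scaling [cm/s]."""
    return 8.8e6 * (g / 3e-8) * (n_p / 1e-3)**-1 * (T / 1e8)**1.5

def r_drift_Mpc(g=3e-8, n_p=1e-3, T=1e8, h50=1.0):
    """Drift distance over a Hubble time, Eq. (3) scaling [Mpc]."""
    return 1.8 * (g / 3e-8) * (n_p / 1e-3)**-1 * (T / 1e8)**1.5 / h50

# Cluster-outskirt-like numbers: lower density at similar g gives a larger drift
print(t_eq_yr(), v_drift_cm_s(), r_drift_Mpc(n_p=3e-4))
```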
## 3. Gas distribution under the NFW potential We now demonstrate how hydrogen and helium are distributed in clusters described by the NFW profile $$\rho =\frac{\rho _s}{(r/r_s)(1+r/r_s)^2}.$$ (5) As has been shown by Makino et al. (1998), such a potential results in an analytic form of the gas number density: $$n_{\mathrm{gas}}(x)=n_{\mathrm{gas}}(0)e^{-\eta _{\mathrm{gas}}}(1+x)^{\eta _{\mathrm{gas}}/x},$$ (6) where $`x=r/r_s`$, and $`\eta _{\mathrm{gas}}=4\pi G\mu m_p\rho _sr_s^2/kT`$. Except for the small core radius, $`n_{\mathrm{gas}}(x)`$ is well approximated by the conventional $`\beta `$ model. If we neglect the interaction between protons and helium nuclei, then $$n_p=n_{p0}e^{-\eta }(1+x)^{\eta /x}$$ (7) $$n_\alpha =n_{\alpha 0}e^{-8\eta /3}(1+x)^{8\eta /3x}$$ (8) where $`\eta =2\pi Gm_p\rho _sr_s^2/kT`$. Eqs (7) and (8) give $`n_\alpha \propto n_p^{8/3}`$, which differs from the prediction by Gilfanov & Sunyaev (1984), $`n_\alpha \propto n_p^6`$. The discrepancy is probably due to the fact that we have not accounted for the diffusion-induced electric fields. We display in Fig. 1 the radial distributions of protons and helium nuclei, as well as their combined result, for a typical nearby cluster with $`kT=7`$ keV, $`\eta _{\mathrm{gas}}=10`$ and $`r_s=1`$ Mpc (e.g. Ettori & Fabian 1999). It is apparent that the intracluster gas is dominated by different elements at different radii: within the core radius of a few tenths of $`r_s`$, the number density of helium is about four times larger than that of hydrogen because of the sedimentation of helium nuclei, giving rise to a significant excess of both mass in gas (see Fig. 2) and X-ray emission in the central region of the cluster relative to the conventional model. Outside the core radius, the helium profile shows a sharp drop and protons become the major component of the gas. It has been claimed that the total gas distribution can be approximated by the $`\beta `$ model (Makino et al. 1998). In fact, it is easy to show that both the narrow (helium) and extended (hydrogen) components of the intracluster gas can be fitted by $`\beta `$ models with different $`\beta `$ parameters. This provides a natural explanation for the double $`\beta `$ model advocated recently for cooling flow clusters (e.g. Ikebe et al. 1996; Xu et al. 1998; Mohr, Mathiesen & Evrard 1999; etc.). Also plotted in Fig. 1 is the mean molecular weight calculated by $`\mu =(n_p+4n_\alpha )/(2n_p+3n_\alpha )`$. The constant mean molecular weight is now replaced by a decreasing function of radius. The asymptotic values of $`\mu `$ at small and large radii are $`4/3`$ and $`1/2`$, respectively. We present in Fig. 2 a comparison of the gas masses determined by the invariant and variant mean molecular weights, using the same cluster parameters as in Fig. 1. To facilitate the comparison, we require that the total number of particles within $`r_{200}`$ remain unchanged, where $`r_{200}`$ is the radius within which the mean cluster mass density is 200 times the critical mass density of the Universe ($`\mathrm{\Omega }_0=1`$). It appears that the sedimentation of helium nuclei leads to a remarkable concentration of baryonic matter toward the cluster center, which challenges the conventional prediction that the baryon fraction increases monotonically with radius (Fig. 3). This opens the possibility that the asymptotic baryon fraction at large radii may match the universal value defined by Big Bang Nucleosynthesis, although a detailed investigation is still needed.
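For concreteness, the profiles of Eqs. (7)-(8) and the resulting $`\mu (x)`$ can be tabulated as in the sketch below. From the definitions above, the proton $`\eta `$ is related to $`\eta _{\mathrm{gas}}`$ by $`\eta =\eta _{\mathrm{gas}}/2\mu `$; the value used here, together with the central densities, is an illustrative placeholder roughly matching the $`\eta _{\mathrm{gas}}=10`$ case.

```python
import numpy as np

ETA_P = 8.5   # proton eta ~ eta_gas/(2*mu), with eta_gas = 10 and mu ~ 0.59

def n_profiles(x, n_p0=1.0, n_a0=4.0):
    """Proton and helium-nucleus profiles under the NFW potential, Eqs. (7)-(8).
    x = r/r_s; central densities n_p0, n_a0 are placeholders chosen to give
    a helium-dominated core, as in Fig. 1."""
    n_p = n_p0 * np.exp(-ETA_P) * (1 + x)**(ETA_P / x)
    n_a = n_a0 * np.exp(-8 * ETA_P / 3) * (1 + x)**(8 * ETA_P / (3 * x))
    return n_p, n_a

x = np.logspace(-2, 1, 7)                     # from 0.01 r_s out to 10 r_s
n_p, n_a = n_profiles(x)
mu = (n_p + 4 * n_a) / (2 * n_p + 3 * n_a)    # local mean molecular weight
for xi, ratio, m in zip(x, n_a / n_p, mu):
    print(f"r/r_s = {xi:6.2f}   n_alpha/n_p = {ratio:9.3e}   mu = {m:.3f}")
```

The ratio $`n_\alpha /n_p`$ drops by a factor of roughly $`e^{5\eta /3}`$ between the center and large radii, so the gas switches from helium- to hydrogen-dominated just outside the core and $`\mu `$ runs from near 4/3 down to 1/2.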
## 4. Conclusions Intracluster gas is mainly composed of hydrogen and helium. Their average abundances over a whole cluster should match the cosmic mixture. However, their spatial distributions are entirely determined by the underlying gravitational potential of the cluster, and thus follow the Boltzmann distribution. On the other hand, clusters are believed to have formed at redshift $`z\sim 1`$. Therefore, the helium nuclei in clusters may have entirely or partially sedimented into the central cores of clusters today. This will lead to a significant change of the radial distributions of gas and baryon fraction. In the present Letter, we have only discussed the impact on the dynamical aspect of clusters. We will present elsewhere the effect on the cluster X-ray emission (e.g., cooling flows, the double $`\beta `$ model, etc.) as a result of the sedimentation of helium nuclei. In the conventional treatment, where the mean molecular weight is assumed to be constant, one may have underestimated the total dynamical mass of clusters by $`\sim 20\%`$. Using a more rigorous approach, in which the NFW profile is taken as the background gravitational field of clusters, we have studied the hydrogen and helium distributions. Indeed, the sedimentation of helium nuclei toward cluster centers significantly changes the distribution of intracluster gas, with the gas being more centrally concentrated than the dark matter. This may open a possibility of resolving the puzzle that the baryon fraction predicted by the conventional model increases monotonically with radius. In short, a number of cosmological applications of the dynamical properties of clusters will be affected by the sedimentation of helium nuclei, if it has really taken place during the evolution of clusters; these include the determination of $`\mathrm{\Omega }_M`$ through $`f_b`$, the constraints on cosmological models through the X-ray luminosity - temperature relation, etc. A detailed theoretical study, together with the observational constraints, will be made in subsequent work. We thank an anonymous referee for helpful comments. This work was supported by the National Science Foundation of China, under Grant No. 1972531.
no-problem/9912/patt-sol9912003.html
ar5iv
text
# Dipole-Mode Vector Solitons ## Abstract We find a new type of optical vector soliton that originates from trapping of a dipole mode by a soliton-induced waveguide. These solitons, which appear as a consequence of the vector nature of the two-component system, are more stable than the previously found optical vortex-mode solitons and represent a new type of extremely robust nonlinear vector structure. Perhaps one of the most desirable goals of Optics is the development of purely optical devices in which light can be used to guide and manipulate light itself. This motivation explains the growing interest in self-guided beams (or spatial optical solitons) and the recent theoretical and experimental study of spatial solitons and their interactions . It is not only the reconfigurable and steerable soliton-induced waveguides created in a bulk medium that are of particular practical interest, but also a spatial soliton that guides another beam (of a different polarization or frequency). This may become a completely new object, a vector soliton, with an internal structure and new dynamical properties, which yield surprising results for the stability of such an object even in the simplest cases. Complex phenomena induced by the vector nature of nonlinear wave equations arise in many fields of Physics and are already firmly placed in the realm of condensed matter physics, the dynamics of biomolecules, and nonlinear optics. Recently, interest in these structures and the theoretical possibilities they offer has been renewed because of their experimental realization in different physical contexts. For instance, vector phenomena have been observed in Bose-Einstein condensation, with vortices in multicomponent condensates or nontrivial topological defects due to interspecies interaction . Finally, vectorially active optical media are also currently being investigated because of the many new characteristics they provide compared to scalar systems . In this Letter we study a vector object formed by two optical beams that interact incoherently in a bulk nonlinear medium. If the nonlinearity of the medium is self-focusing, an isolated beam, under proper conditions, will form a self-trapped state - a spatial optical soliton . Such a soliton changes the refractive index of the medium and creates a stationary effective waveguide. A second beam of much lower intensity is subjected to the induced change of the refractive index and can be trapped as a localized mode of that waveguide. From linear optical waveguide theory, we expect that a radially symmetric waveguide can host different types of modes with more elaborate geometries \[Figs. 1(a-c)\]. However, at higher intensities of the trapped beam one must regard the two beams as components of a vector soliton, self-trapped by a self-consistent change of the refractive index induced by both beams. In this case we cannot treat the shapes of the beams as independent, and it is not trivial to conclude whether we may obtain states which are a composition of a lowest-order state in one component and a higher-order state in the other. Recalling previous work on the existence and stability of two-component one-dimensional solitons , one may expect to find at least two types of such complex objects in the two-dimensional case. The first type is the recently discussed two-dimensional vector soliton which has a node-less shape \[e.g., as shown in Fig. 1(a)\] in the first component and a vortex in the second one \[Fig. 1(b)\].
The second type, introduced here, maintains a similar shape for the first component, while the second beam develops a node along a certain direction \[Fig. 1(c)\], forming what we call a dipole-mode vector soliton. The purpose of this Letter is two-fold. First, we discuss the stability of the vortex-mode vector solitons and show that these objects are linearly unstable and can decay into dipole-mode vector solitons. Secondly, we prove that these dipole-mode solitons exist for a wide range of relative intensities of their components and show that they survive both small and large amplitude perturbations, their propagation dynamics resembling that of two spiraling beams . We would like to emphasize here that both results are highly nontrivial. While it is commonly believed that asymmetric solitary waves possess a higher energy and should be a priori unstable, our results demonstrate that the opposite is true: an excited state with an elaborate geometry may indeed be more stable than a radially symmetric one and, as such, would be a better candidate for experimental realization. We stress here that the recently discovered method of creating multi-component spatial optical solitons in a photorefractive medium would allow a simple and direct verification of our theory, including the questions of soliton existence and stability. The outline of the Letter is as follows. First we formulate the model for our system. Next, we proceed with the study of vortex-mode vector solitons in the reasonable parameter range of the model. We recall previous studies on the issue of existence of such states and add new results arising from a linear stability analysis and numerical simulations of the dynamics of these solitons. Having concluded that the vortex-mode vector solitons are linearly unstable, we proceed with the study of the dipole variant. We obtain a continuous family of stationary solutions in the same parameter range. In this case, our tools for the linear stability analysis give no conclusive results, but numerical simulations of highly perturbed states show a periodic, stable evolution. Finally we conclude and summarize the implications of this work. The model.- We consider two incoherently interacting beams propagating along the direction $`z`$ in a bulk, weakly nonlinear saturable optical medium. The model corresponds, in the isotropic approximation, to the experimentally realized solitons in photorefractive materials. The problem is described by the normalized, coupled equations for the slowly varying beam envelopes, $`E_1`$ and $`E_2`$, which can be approximately written in the form : $$i\frac{\partial E_{1,2}}{\partial z}+\mathrm{\Delta }_{\perp }E_{1,2}+\frac{E_{1,2}}{1+|E_{1,2}|^2+|E_{2,1}|^2}=0,$$ (1) where $`\mathrm{\Delta }_{\perp }`$ is the transverse Laplacian. Stationary solutions of Eqs. (1) can be found in the form $`E_1=\sqrt{\beta _1}u(x,y)\mathrm{exp}(i\beta _1z)`$, $`E_2=\sqrt{\beta _1}v(x,y)\mathrm{exp}(i\beta _2z)`$, where $`\beta _1`$ and $`\beta _2`$ are two independent propagation constants.
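Before specializing to stationary states, the propagation dynamics governed by Eqs. (1) can be explored with a standard split-step Fourier scheme. The sketch below is our illustration only; the grid, step sizes, and initial beam profiles are assumptions, not parameters taken from the Letter:

```python
import numpy as np

def split_step(E1, E2, dx, dz, steps):
    """Split-step Fourier integration of Eqs. (1):
    i dE/dz + Lap_perp E + E/(1 + |E1|^2 + |E2|^2) = 0."""
    k = 2 * np.pi * np.fft.fftfreq(E1.shape[0], d=dx)
    KX, KY = np.meshgrid(k, k)
    half = np.exp(-0.5j * dz * (KX**2 + KY**2))   # exact linear half-step
    for _ in range(steps):
        E1, E2 = (np.fft.ifft2(half * np.fft.fft2(E)) for E in (E1, E2))
        # saturable nonlinear phase over a full step
        phase = np.exp(1j * dz / (1.0 + np.abs(E1)**2 + np.abs(E2)**2))
        E1, E2 = E1 * phase, E2 * phase
        E1, E2 = (np.fft.ifft2(half * np.fft.fft2(E)) for E in (E1, E2))
    return E1, E2

# Example: a Gaussian u-beam guiding a weak dipole-shaped v-beam
x = np.linspace(-10, 10, 128)
X, Y = np.meshgrid(x, x)
u0 = 1.5 * np.exp(-(X**2 + Y**2) / 4)
v0 = 0.3 * X * np.exp(-(X**2 + Y**2) / 4)      # node along x = 0
u1, v1 = split_step(u0.astype(complex), v0.astype(complex),
                    dx=x[1] - x[0], dz=0.01, steps=1000)
```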
Measuring the transverse coordinates in the units of $`\sqrt{\beta _1}`$, and introducing the ratio of the propagation constants, $`\lambda =(1-\beta _2)/(1-\beta _1)`$, from Eqs. (1) we derive a system of stationary equations for the normalized envelopes $`u`$ and $`v`$: $`\mathrm{\Delta }_{\perp }u-u+uf(I)=0,`$ (2) $`\mathrm{\Delta }_{\perp }v-\lambda v+vf(I)=0,`$ (3) where $`f(I)=I(1+sI)^{-1}`$, $`I=u^2+v^2`$, and $`s=1-\beta _1`$ plays the role of a saturation parameter. For $`s=0`$, this system describes the Kerr nonlinearity. In this paper we will work with intermediate values of saturation, around $`s=0.5`$. Vortex-mode solitons.- First, we look for radially symmetric solutions $`u(x,y)=u(r)`$, $`v(x,y)=v(r)\mathrm{exp}(im\varphi )`$, in which the second component carries a topological charge, $`m`$, and we assume that the $`u`$-component has no charge. In this case, Eqs. (2) take the form: $`\mathrm{\Delta }_\mathrm{r}u-u+uf(I)=0,`$ (4) $`\mathrm{\Delta }_\mathrm{r}v-(m^2/r^2)v-\lambda v+vf(I)=0,`$ (5) where $`\mathrm{\Delta }_\mathrm{r}=(1/r)(d/dr)(rd/dr)`$. The fundamental, bell-shaped solutions with $`m=0`$ exist only at $`\lambda =1`$. In the remaining region of the parameter plane $`(s,\lambda )`$, solutions carrying a topological charge ($`m=\pm 1`$) in the second component exist. Solutions of this type have been found and thoroughly investigated for the saturable nonlinearity , and for the so-called threshold nonlinearity . The families of these radially symmetric, two-component vector solitons are characterized by a single parameter $`\lambda `$, and at any fixed value of $`s`$, the border of their existence domain is determined by a cut-off value, $`\lambda _c`$. A two-component trapped state exists only for $`\lambda >\lambda _c`$. Near the cutoff point this bell-shaped state can be presented as a waveguide created by the $`u`$-component guiding a small-amplitude mode $`v`$. Away from the cut-off, the amplitude of the $`v`$-component grows, and the resulting vector soliton develops a ring-like shape. An example of the ring-shaped vortex mode for our model is presented in Fig. 2(b). An important physical characteristic of vector solitons of this type is the total power defined as $`P=P_u+P_v=2\pi \int _0^{\mathrm{\infty }}(u^2+v^2)r\,dr`$, where the partial powers $`P_u`$ and $`P_v`$ are the integrals of motion for the model (1). The dependencies $`P(\lambda )`$, $`P_v(\lambda )`$, and $`P_u(\lambda )`$ for a fixed $`s`$ completely characterize the family of vector solitons \[a typical example is shown in Fig. 2(a)\] for $`\lambda >\lambda _c`$. Stability analysis.- Similar to the well-studied case of the (1+1)-D vector solitons (see, e.g., Ref. ), the vortex-mode soliton is associated, in the linear limit, with a soliton-induced waveguide supporting a higher-order mode. It is therefore tempting to draw the analogy between the higher-order (1+1)-D two-hump solitons and (2+1)-D ring-shaped solitons. Given the established stability of the multi-hump one-dimensional structure in a saturable medium , this line of thought would lead us (erroneously!) to conclude that the vortex-mode vector solitons in our model should be linearly stable. To show that the above conclusion is wrong, we have performed a linear stability analysis of the two-dimensional vortex-mode vector solitons. Our technique consists in linearizing Eqs. (1) around the vortex solution and evolving them with completely random initial conditions . Usually, the solution will be a linear combination of modes evolving with some real frequencies, $`\mu `$. However, if the linearized equations bear modes with complex eigenvalues \[$`\mu =\mathrm{Re}(\mu )+i\,\mathrm{Im}(\mu )`$\], we expect an exponential growth of our random data, with convergence to the invariant space of one of these eigenvalues. This method allows us to extract the most unstable eigenvalue and its associated manifold, in a similar way to the classical analysis of Lyapunov exponents in unstable systems.
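The procedure just described, evolving the linearized equations from random initial data and monitoring the growth of the norm, is essentially a power iteration. A schematic sketch follows; the linearized step `apply_L`, the grid shape, and the step sizes are hypothetical placeholders, not code from the Letter:

```python
import numpy as np

def dominant_growth_rate(apply_L, shape, dz, n_steps, renorm_every=50):
    """Evolve random initial data under the linearized equations and
    read off Im(mu) of the most unstable mode from the exponential
    growth of the norm, as in a Lyapunov-exponent analysis.
    `apply_L` (hypothetical) advances a perturbation by one step dz
    under Eqs. (1) linearized around the stationary soliton."""
    rng = np.random.default_rng(0)
    p = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    log_norm = 0.0
    for step in range(1, n_steps + 1):
        p = apply_L(p, dz)
        if step % renorm_every == 0:
            norm = np.linalg.norm(p)
            log_norm += np.log(norm)
            p /= norm                    # keep the amplitude bounded
    return log_norm / (n_steps * dz)     # -> Im(mu) when it is positive
```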
Our linear stability analysis has proved that, although saturation does have a strong stabilizing effect on the ring vector solitons , all vector solitons of this type are linearly unstable. In Fig. 3(a), where we plot a typical dependence of the eigenvalue $`\mu `$ of the most unstable mode on the soliton parameter $`\lambda `$ for a fixed $`s`$, we see that the growth rate of the instability tends to zero at the cut-off point of the vortex mode \[cf. Figs. 2(a) and 3(a)\]. This behaviour is consistent with the inherent stability of the fundamental scalar soliton in a saturable medium. Elsewhere the growth rate is positive, and it increases with increasing intensity of the vortex mode. In Figs. 3(a-b) we compare the linear stability analysis with dynamical simulations of the vortex-mode soliton near cutoff, perturbed with random noise. The instability, although largely suppressed by saturation, triggers the decay of the soliton into a dipole structure \[as shown in Fig. 3(b)\] for even a small contribution of the charged mode. The dipole demonstrates astonishing persistence for large propagation distances as a rotating and radiating pulsar state. Dipole-mode solitons.- It is apparent from the above analysis that the dipole-mode vector soliton should be more stable than the vortex-mode soliton. We have identified the existence domain of these solitons by solving numerically the stationary equations (2) to find localized asymmetric solutions carrying a dipole mode \[as shown in Fig. 1(c)\] in the $`v`$-component. One characteristic example of the dipole-mode soliton family is shown in Fig. 4(a) for a fixed $`s`$. Our linear stability analysis for these solutions does not converge to a particular value of $`\mathrm{Im}(\mu )`$. This indicates that either the eigenvalues of the unstable modes, if they exist, are extremely small, or that the unstable modes have shapes which are too weakly excited by the random perturbations. To obtain further information on the dynamical stability of dipole-mode vector solitons, we have propagated numerically different perturbed dipole-mode solitons for distances up to several hundred $`z`$ units, or diffraction lengths. To put these numbers into physical perspective, we note that, in current experiments on solitons in photorefractive materials, the typical crystal length is $`\sim 20`$ $`mm`$, whereas $`z=100`$ in our model corresponds to a soliton propagation length of $`20-40`$ $`mm`$. We have performed two types of numerical experiments. First, we have found that small perturbations or random noise lead to bounded oscillations of the vector soliton, which retains its shape. This shows that the dipole-mode vector solitons should be stable enough for experimental observation. On the other hand, strong perturbations, such as a large disproportion of the humps of the dipole or a relative displacement of the components, may alter the shape of the dipole. Such a perturbation, in the process of evolution, is typically transferred from one component to the other in a robust, periodic way.
It is clear from our simulations \[Fig. 5\] that in this case the dynamics is similar to that of two beams spiraling together, but with zero initial angular momentum . From the evidence above it is clear that the unstable modes of a dipole vector soliton, if they exist, should be rare and hard to excite. Calculations of the linear stability spectrum around the exact soliton solution should provide a complete answer to the stability question, which will be the subject of our further research. Summing up the stability results, we can state that the dipole-mode vector solitons are extremely robust objects. According to our numerical tests, they have a typical lifetime of several hundred diffraction lengths and survive a wide range of perturbations. This is true even for vector states with a large power in the dipole component ($`P_v>P_u`$), i.e. in the parameter region which cannot be safely reached for vortex-mode solitons, since the latter become unstable much sooner. Finally, it is important to mention that there exist other physical models where stable, radially asymmetric, dipole solitary waves play an important role in the nonlinear wave dynamics. All these models, however, possess scalar, or one-component, structures. The most famous examples are the Larichev-Reznik soliton, a localized solution of the Charney equation for Rossby waves , and the dipole Alfvén vortex soliton in an inhomogeneous plasma . Another type of dipole solitary wave is found as a two-soliton bound state in nonlocal or anisotropic nonlocal media, due to the anomalous interaction of two solitons with opposite phases. Nevertheless, the physics behind all known dipole solitary waves and the corresponding single-component nonlinear models differs drastically from the problem considered above. Therefore, the dipole-mode vector soliton we describe in this Letter is a genuinely new type of solitary wave in a homogeneous isotropic bulk medium, a phenomenon that may occur in many other physical applications. In conclusion, we have analyzed the existence and stability of radially symmetric and asymmetric higher-order vector optical solitons in a saturable nonlinear bulk medium, and predicted a new type of optical soliton associated with the dipole mode guided by a soliton-induced waveguide. We have demonstrated that solitons carrying a topological charge are linearly unstable and, as a consequence, may decay into dipole-mode solitons. There is also strong evidence of the stability of these dipole-mode solitons. We believe that all the effects predicted in this Letter, including the existence and stability of solitons, can be easily verified in experiments with photorefractive media. J.J. G-R. thanks the Optical Sciences Center, ANU, for the warm hospitality during his stay in Australia. V.M.P-G. and J.J.G-R. are partially supported by DGICYT under grant PB96-0534.
no-problem/9912/hep-ex9912039.html
ar5iv
text
# Particle - antiparticle asymmetries in the production of baryons in 500 GeV/$`c`$ $`\pi ^{-}`$-nucleon interactions ## Abstract We present the Fermilab E791 measurement of baryon - antibaryon asymmetries in the production of $`\mathrm{\Lambda }^0`$, $`\mathrm{\Xi }`$, $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }_c`$ in 500 GeV/$`c`$ $`\pi ^{-}`$-nucleon interactions. Asymmetries have been measured as a function of $`x_F`$ and $`p_T^2`$ over the range $`-0.12<x_F<0.12`$ and $`p_T^2<4`$ (GeV/$`c`$)<sup>2</sup> for hyperons and $`-0.1<x_F<0.3`$ and $`p_T^2<8`$ (GeV/$`c`$)<sup>2</sup> for the $`\mathrm{\Lambda }_c`$ baryons. We observe clear evidence of leading particle effects and a basic asymmetry even at $`x_F=0`$. These are the first high statistics measurements of the asymmetry in both the target and beam fragmentation regions in a fixed target experiment. Particle - antiparticle asymmetry is an excess in the production rate of a particle over its antiparticle (or vice-versa). It can be quantified by means of the asymmetry parameter $$A=\frac{N-\overline{N}}{N+\overline{N}},$$ (1) where $`N`$ ($`\overline{N}`$) is the number of produced particles (antiparticles). Measurements of this parameter show leading particle effects, which are manifest as an enhancement in the production rate of particles which have one or more valence quarks in common with the initial (colliding) hadrons, compared to that of their antiparticles, which have fewer valence quarks in common. Other effects, such as the associated production of mesons and baryons, can also contribute to a non-zero value of the asymmetry parameter. Leading particle effects in charm hadron production have been extensively studied in recent years from both the experimental and theoretical points of view . The same type of leading particle effects is expected to appear in strange hadron production. Although previous reports of global asymmetries in $`\mathrm{\Lambda }^0`$, $`\mathrm{\Xi }`$ and $`\mathrm{\Omega }`$ hadroproduction already exist , there is a lack of a systematic study of light hadron production asymmetries. From a theoretical point of view, models which can account for the presence of leading particle effects in charm hadron production use some kind of non-perturbative mechanism for hadronization, in addition to the perturbative production of charm quarks . Given E791’s $`\pi ^{-}`$ beam incident on nucleon targets, strong differences are expected in the asymmetry between the $`x_F<0`$ and $`x_F>0`$ regions. In particular, as $`\mathrm{\Lambda }`$ (or $`\mathrm{\Lambda }_c`$) baryons are double leading in the $`x_F<0`$ region, while both $`\mathrm{\Lambda }`$ ($`\mathrm{\Lambda }_c`$) and $`\overline{\mathrm{\Lambda }}`$ ($`\overline{\mathrm{\Lambda }_c}`$) are leading in the $`x_F>0`$ region, an asymmetry growing with $`\left|x_F\right|`$ is expected in the negative $`x_F`$ region and no asymmetry is expected in the positive $`x_F`$ region. $`\mathrm{\Xi }^{-}`$ baryons are leading in both the positive and negative $`x_F`$ regions, whereas $`\mathrm{\Xi }^+`$ are not; thus an asymmetry growing with $`\left|x_F\right|`$ is expected in this case. $`\mathrm{\Omega }^\pm `$ are both non-leading, so no asymmetry is expected at all.
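For reference, Eq. (1), together with the standard error propagation for independent, approximately Poisson-distributed yields, can be written compactly. This sketch is our illustration, not the collaboration's analysis code:

```python
import numpy as np

def asymmetry(n, nbar):
    """Production asymmetry A = (N - Nbar)/(N + Nbar) of Eq. (1),
    with the error propagated assuming independent Poisson yields."""
    total = n + nbar
    a = (n - nbar) / total
    err = 2.0 * np.sqrt(n * nbar / total**3)
    return a, err

# Global Lambda0 asymmetry from the yields quoted below
# (treating them as pure Poisson counts, i.e. ignoring fit errors):
print(asymmetry(2571700.0, 1669000.0))   # ~0.213 +/- 0.0005
```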
Experiment E791 recorded data from 500 GeV/c $`\pi ^{-}`$ interactions in five thin foils (one platinum and four diamond) separated by gaps of 1.34 to 1.39 cm. Each foil was approximately 0.4% of a pion interaction length thick (0.6 mm for the platinum foil and 1.5 mm for the carbon foils). A complete description of the E791 spectrometer can be found in Ref. . An important element of the experiment was its extremely fast data acquisition system which, combined with a very open trigger requiring a beam particle and a minimum transverse energy deposited in the calorimeters, was used to record a data sample of $`2\times 10^{10}`$ interactions. The E791 experiment reconstructed more than $`2\times 10^5`$ charm events and many millions of strange baryons. Hyperons produced in the carbon targets and with a decay point downstream of the SMD planes were kept for further analysis. $`\mathrm{\Lambda }^0`$ candidates were selected in the $`p\pi ^{-}`$ and c.c. decay mode. All combinations of two tracks with an a priori Cherenkov probability of being identified as a $`p\pi ^{-}`$ combination were selected for further analysis if the tracks had a distance of closest approach to the decay vertex of less than $`0.7`$ cm. In addition, the invariant mass was required to be between $`1.101`$ and $`1.127`$ GeV/$`c^2`$, the ratio of the momentum of the proton to that of the pion was required to be larger than 2.5, and the reconstructed $`\mathrm{\Lambda }^0`$ decay vertex had to be downstream of the last target. The impact parameter had to be less than $`0.3`$ cm if the particle decayed within the first $`20`$ cm, and less than 0.4 cm if it decayed more than $`20`$ cm downstream of the target region. $`\mathrm{\Xi }`$’s were selected in the $`\mathrm{\Lambda }^0\pi ^{-}`$ and c.c. decay mode, and at the same time $`\mathrm{\Omega }`$’s were selected in the $`\mathrm{\Lambda }^0K^{-}`$ and c.c. channel. Starting with a $`\mathrm{\Lambda }^0`$ candidate, a third distinct track was added as a possible pion or kaon daughter. All three tracks were required to be only in the drift chamber region. Cuts for the daughter $`\mathrm{\Lambda }^0`$ were the same as above, except for that on the impact parameter. The invariant mass for the three-track combination was required to be between $`1.290`$ and $`1.350`$ GeV/$`c^2`$ for $`\mathrm{\Xi }`$ candidates and between $`1.642`$ and $`1.702`$ GeV/$`c^2`$ for $`\mathrm{\Omega }`$ candidates. In addition, the $`\mathrm{\Xi }`$ and $`\mathrm{\Omega }`$ decay vertices were required to be upstream of the $`\mathrm{\Lambda }^0`$ decay vertex and downstream of the SMD region. For $`\mathrm{\Omega }`$’s, the third track had to have a clear kaon signature in the Cherenkov counter. From fits to a Gaussian and a linear background we obtained $`2,571,700\pm 3,100`$ $`\mathrm{\Lambda }^0`$ and $`1,669,000\pm 2,600`$ $`\overline{\mathrm{\Lambda }}^0`$ from approximately $`6.5\%`$ of the total E791 data sample, as well as $`996,200\pm 1,900`$ $`\mathrm{\Xi }^{-}`$ and $`706,600\pm 1,700`$ $`\mathrm{\Xi }^+`$, and $`8,750\pm 130`$ $`\mathrm{\Omega }^{-}`$ and $`7,469\pm 120`$ $`\mathrm{\Omega }^+`$, these last four from the total E791 data sample. The final data samples for hyperons are shown in Fig. 1. For charm baryons, all five targets were used. In most cases, $`\mathrm{\Lambda }_c`$’s decayed in air between the target foils, before entering the silicon vertex detectors. All combinations of three tracks consistent with an a priori Cherenkov probability of being identified as a $`pK\pi `$ and c.c. combination were selected for further analysis if the distance from the $`\mathrm{\Lambda }_c`$ decay vertex to the primary vertex was at least 5 standard deviations and the invariant mass was between $`2.15`$ and $`2.45`$ GeV/$`c^2`$.
To further enrich the sample, we required the $`\mathrm{\Lambda }_c`$ to decay at least five standard deviations downstream of the nearest target foil and between one and four lifetimes. The $`\mathrm{\Lambda }_c`$ momentum vector, reconstructed from its decay products, was required to pass within $`3\sigma `$ of the primary vertex. It was required that the primary and secondary vertices had an acceptable $`\chi ^2`$ per degree of freedom. We also required that at least two of the three $`\mathrm{\Lambda }_c`$ decay tracks be inconsistent with coming from the primary vertex. The final data sample, fitted to a Gaussian plus a quadratic background, has $`1,025\pm 45`$ $`\mathrm{\Lambda }_c^+`$ and $`794\pm 42`$ $`\mathrm{\Lambda }_c^{-}`$. The invariant mass plot for the $`pK\pi `$ combination is shown in Fig. 2. For each baryon - antibaryon pair, the asymmetry both as a function of $`x_F`$ and of $`p_T^2`$ was calculated by means of Eq. (1). Values for $`N`$ ($`\overline{N}`$) were obtained from fits to the corresponding effective mass plots for events selected within specific $`x_F`$ and $`p_T^2`$ ranges. In all cases, well defined particle signals were evident. Efficiencies and geometrical acceptances were estimated using a sample of Monte Carlo (MC) events produced with the PYTHIA and JETSET event generators . These events were projected through a detailed simulation of the E791 spectrometer and then reconstructed with the same algorithms used for data. In the simulation of the detector, special care was taken to represent the behaviour of tracks passing through the deadened region of the drift chambers near the beam. The behavior of the apparatus and the details of the reconstruction code changed during the data taking and the long data processing periods, respectively. In order to account for these effects, we generated the final MC sample in subsets mirroring these behaviors and their fractional contributions to the final data set. Good agreement between the MC and data samples in a variety of kinematic variables and resolutions was achieved. We generated $`5`$ million $`\mathrm{\Lambda }^0`$, $`16.4`$ million $`\mathrm{\Xi }`$, $`4.8`$ million $`\mathrm{\Omega }`$, and $`7`$ million $`\mathrm{\Lambda }_c`$ MC events. Sources of systematic uncertainty were checked in each case. For hyperons we looked for effects coming from changes in the main selection criteria, the minimum transverse energy in the calorimeter required by the event trigger, uncertainties in the relative efficiencies for particle and antiparticle, effects of the $`2.5\%`$ $`K^{-}`$ contamination in the beam, effects of $`K_S^0`$ contamination in the $`\mathrm{\Lambda }^0`$ sample, the stability of the analysis for different regions of the fiducial volume, and binning effects. For $`\mathrm{\Lambda }_c`$’s we checked the effect of varying the main selection criteria, the effect of the kaon contamination in the beam, the contamination of the data sample with $`D`$ and $`D_s`$ mesons decaying in the $`K\pi \pi `$ and $`KK\pi `$ modes, and the parametrization of the background shape. Systematic uncertainties are negligible in comparison with statistical errors for the $`\mathrm{\Lambda }_c`$ asymmetry. However, they are not negligible for the hyperons, and are included in the error bars.
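Building on the earlier sketch, the acceptance corrections enter by weighting the raw yields with the MC-derived efficiencies before forming Eq. (1). This is again a simplified illustration; efficiency uncertainties are left to the systematics, as described above:

```python
import numpy as np

def corrected_asymmetry(n, nbar, eff, effbar):
    """Acceptance-corrected asymmetry: yields are divided by the
    MC-derived efficiencies before forming Eq. (1). Statistical
    errors come from the raw (Poisson) yields only."""
    w, wbar = n / eff, nbar / effbar
    a = (w - wbar) / (w + wbar)
    err = 2.0 * w * wbar / (w + wbar)**2 * np.sqrt(1.0 / n + 1.0 / nbar)
    return a, err
```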
The asymmetries in the corresponding $`x_F`$ ranges, integrated over our $`p_T^2`$ range, and in the corresponding $`p_T^2`$ ranges, integrated over our $`x_F`$ range, are shown in Figs. 3 and 4 for hyperons and $`\mathrm{\Lambda }_c`$ baryons respectively, in comparison with predictions from the default PYTHIA/JETSET. We have presented data on hyperon and $`\mathrm{\Lambda }_c`$ production asymmetries in the central region, for both $`x_F>0`$ and $`x_F<0`$. The range of $`x_F`$ covered allowed the first simultaneous study of the hyperon and $`\mathrm{\Lambda }_c`$ production asymmetries in both the negative and positive $`x_F`$ regions in a fixed target experiment. Our results show, in all cases, a positive asymmetry after acceptance corrections over the entire kinematic range studied, and are consistent with results obtained by previous experiments . Our data show that leading particle effects play an increasingly important role as $`\left|x_F\right|`$ increases. The non-zero asymmetries measured in regions close to $`x_F=0`$ suggest that energy thresholds for the associated production of baryons and mesons play a role in particle - antiparticle asymmetries. On the other hand, the similarity of the $`\mathrm{\Lambda }^0`$ and $`\mathrm{\Lambda }_c`$ asymmetries as a function of $`x_F`$ (see Fig. 5) suggests that the $`ud`$ diquark shared between the produced $`\mathrm{\Lambda }`$ baryons and the nucleons in the target should play an important role in the measured asymmetry in the $`x_F<0`$ region. However, one expects the $`\mathrm{\Lambda }_c`$ asymmetry to grow more slowly than the $`\mathrm{\Lambda }^0`$ asymmetry due to the mass difference between the two particles. The PYTHIA/JETSET model describes our results only qualitatively; they are better described in terms of a model including the recombination of valence and sea quarks already present in the initial (colliding) hadrons and effects due to the energy thresholds for the associated production of baryons and mesons .
no-problem/9912/astro-ph9912334.html
ar5iv
text
# DIRECT Distances to Local Group Galaxies ## 1. Distances to Local Group Galaxies The distances to nearby Local Group galaxies are not known very well, especially given the new demands of extragalactic stellar astrophysics, which are only likely to increase with the advent of several 8-m class telescopes (e.g., Aparicio, Herrero, & Sanchez 1998). Nearby galaxies are also crucial calibrators for techniques for establishing the extragalactic distance scale and determining the value of H<sub>0</sub> (Mould et al. 1999). Two important spiral galaxies in our nearest neighbourhood are M31 and M33. Yet, their distances are now known to no better than 10-15%, as there are discrepancies of $`0.2-0.3`$ mag between various distance indicators (e.g. Huterer, Sasselov & Schechter 1995; Holland 1998; Stanek & Garnavich 1998) (see Fig. 1). Direct distances to M31 and M33 are now achievable through the use of geometric techniques, with detached eclipsing binaries and Cepheids, and the use of 8-10-m class telescopes for the required spectroscopy. Such direct distances are the ultimate goal of project DIRECT, which we started three years ago (Kaluzny et al. 1998, 1999; Stanek et al. 1998, 1999). The identification of variables suitable for direct study was accomplished by massive photometry, much of which still continues. In this review I describe some results from the massive photometry by project DIRECT in M31 and M33. ## 2. Project DIRECT The DIRECT project team now consists of K. Z. Stanek (CfA), J. Kaluzny (Warsaw), A. H. Szentgyorgyi (CfA), J. L. Tonry (Hawaii), L. M. Macri (CfA), B. J. Mochejska (Warsaw), and myself. Between September 1996 and November 1999 we obtained $`\sim 170`$ nights on the 1.2-meter FLWO telescope and 35 nights on the 1.3-meter MDM telescope for the project, 23 nights on the 2.1-meter KPNO telescope, and four nights on the Keck II 10-meter telescope. We have completely reduced and analyzed data for five of the fields in M31. We have found in these fields 410 variable stars: 48 eclipsing binaries, 206 Cepheids and 156 other variables; about 350 of these variables are newly discovered. We should stress here that for the first time detached eclipsing binaries were found in M31 by a CCD search. We are completing the reduction of the remaining fields and will continue submitting the next parts of the M31/M33 variables catalogs for publication. The moment our papers are submitted for publication, all the variable star light curves and finding charts are made available through anonymous ftp on cfa-ftp.harvard.edu, in the pub/kstanek/DIRECT directory, and also through the WWW at http://cfa-www.harvard.edu/~kstanek/DIRECT/. In Figure 2 we show the fields observed in M33 by our project in 1996-97 (small rectangles) and 1998-99 (large rectangles). ## 3. Results on Cepheids So far we have found 206 Cepheids in M31 and 270 in M33. For two of them we have obtained Keck II LRIS spectra suitable for detailed abundance analysis and precise velocities. Our approach has been to obtain very well sampled light curves. This secures the identification of all classes of variable stars; allows meaningful Fourier decompositions for Cepheids, to $`i=3`$ at least; and lets us deal with some issues of metallicity differences and blending in the Cepheid population. For example, pulsation resonances are known to play an important role in shaping the morphology of the light curves of Cepheids, e.g.
the Hertzsprung bump progression centered on a period of 10<sup>d</sup>, identified with the 2:1 resonance $`2\omega _0\simeq \omega _2`$. Not surprisingly, it is very sensitive to the opacity in the Cepheid envelope - a crude sort of asteroseismology - and is a good indicator of the metallicity of a Cepheid sample (Buchler 1997). Thanks to data from the microlensing surveys, the metallicity dependence of the resonance center has now been established from Cepheids in the Galaxy, LMC, and SMC (Beaulieu & Sasselov 1997). The resonance center is shifted by a full 2 days for $`\mathrm{\Delta }`$\[Fe/H\]=0.7. For such purposes the Fourier technique is entirely model and photometry independent; it only depends on well sampled light curves in an optical band. The two populations of Cepheids in our M31 data, which we expect to be of different metallicity, are located in two spiral arms. Although our $`inner`$ arm sample is still small (20), its resonance center is well defined and shifted by $`\sim 1`$ day in the fashion described above. The resonance troughs are filled with outliers in Fields B&C, due to the sample of high-metallicity Cepheids from the inner arm. The galactocentric distances of the two samples are 4 and 10 $`kpc`$, respectively. At these distances the abundances in the M31 disk are close to solar and slightly higher. ## 4. Correction for Blending We are using archival $`HST`$ images coinciding with our fields (see Fig. 2) to study the environments of the Cepheids discovered by the ground-based observations of project DIRECT. We find that roughly 50% of our Cepheids are blended with other stars which contribute more than 10% of the light (in the V band). Blending can account for up to 40% of the measured flux from some of the Cepheids, as found in Mochejska et al. (1999), where we report on a sample of 20 Cepheids in M31. The analysis of 90 Cepheids in M33 is practically complete. We find that the average (median) V-band flux contribution from luminous companions which are not resolved on the ground-based images is about 19% (12%) of the flux of the Cepheid in M31 and 27% (17%) in M33. The average (median) I-band flux contribution is about 34% (24%) in M33, i.e. there appears to be no bias in the color of the blends. Our indirect distances to the two galaxies will be easily corrected for this blending. Note that Cepheid V7184 in Fig. 4 was discovered to be a blend by analysing our ground-based DIRECT light curve, before we had $`HST`$ data on it (Kaluzny et al. 1998). Cepheid V4954 was flagged accordingly. This shows the usefulness of good light curves, albeit limited to strong blending only. Our ground-based resolution in M31 and M33 corresponds to the HST resolution at about 10 Mpc. Blending leads to systematically low distances to galaxies observed with the HST, and therefore to systematically high estimates of H<sub>0</sub>. We predict the Cepheid blending effects for a galaxy at $`\sim 25`$ Mpc observed by HST to be severe, with the corresponding implications for the extragalactic distance scale.
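As a back-of-the-envelope illustration of how such blending propagates into distances, the magnitude offset and the distance bias follow directly from the flux excess. This sketch is ours and ignores blending-induced changes in the measured colors, periods, and amplitudes:

```python
import numpy as np

def blending_bias(f_blend):
    """Bias from unresolved companions contributing a fraction
    f_blend of the Cepheid's flux: the star appears brighter by
    dmag, so the inferred distance is low by the returned factor."""
    dmag = 2.5 * np.log10(1.0 + f_blend)
    return dmag, (1.0 + f_blend) ** -0.5

# Mean M31 V-band blending of 19%: ~0.19 mag, ~8% distance underestimate
print(blending_bias(0.19))
```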
Acknowledgements. This work was supported in part by NSF grant AST-9970812. ## References
Aparicio, A., Herrero, A., & Sanchez, F. (eds.) 1998, Stellar Astrophysics for the Local Group, VIII Can. Isl. Winter School (New York: CUP)
Beaulieu, J.P., & Sasselov, D. 1997, in Variables Stars and the Astrophysical Returns of the Microlensing Surveys, ed. R. Ferlet, J.-P. Maillard, & B. Raban (France: Editions Frontieres), 193
Buchler, J. R. 1997, in Variables Stars and the Astrophysical Returns of the Microlensing Surveys (France: Editions Frontieres), 181
Holland, S. 1998, AJ, 115, 1916
Huterer, D., Sasselov, D., & Schechter, P. 1995, AJ, 100, 2705
Kaluzny, J., Stanek, K., Krockenberger, M., Sasselov, D., et al. 1998, AJ, 115, 1016
Kaluzny, J., Mochejska, B., Stanek, K., et al. 1999, AJ, 118, 346
Mochejska, B., Macri, L., Sasselov, D., & Stanek, K. 1999, AJ, submitted \[astro-ph/9908293\]
Mould, J., Huchra, J., Freedman, W., Kennicutt, R., et al. 1999, preprint \[astro-ph/9909260\]
Stanek, K., & Garnavich, P. 1998, ApJ, 503, L131
Stanek, K., Kaluzny, J., Krockenberger, M., et al. 1998, AJ, 115, 1894
Stanek, K., Kaluzny, J., Krockenberger, M., Sasselov, D., et al. 1999, AJ, 117, 2810
no-problem/9912/astro-ph9912060.html
ar5iv
text
## 1 Blazars and Their spectral energy distributions (SEDs) Blazars are radio-loud Active Galactic Nuclei characterized by polarized, highly luminous, and rapidly variable non-thermal continuum emission (Angel & Stockmann 1980) from a relativistic jet oriented close to the line of sight (Blandford & Rees 1978). As such, blazars provide fortuitous natural laboratories to study the jet processes and, ultimately, how energy is extracted from the central black hole. The radio through gamma-ray spectral energy distributions (SEDs) of blazars exhibit two broad humps (Figure 1). The first component peaks at IR/optical in “red” blazars and at UV/X-rays in their “blue” counterparts, and is most likely due to synchrotron emission from relativistic electrons in the jet (see Ulrich, Maraschi, & Urry 1997 and references therein). The second component extends from X-rays to gamma-rays (GeV and TeV energies), and its origin is less well understood. A popular scenario is inverse Compton (IC) scattering of ambient photons, either internal (synchrotron-self Compton, SSC; Tavecchio, Maraschi, & Ghisellini 1998) or external to the jet (external Compton, EC; see Böttcher 1999 and references therein). In the following discussion I will assume the synchrotron and IC scenarios, keeping in mind, however, that a possible alternative for the production of gamma-rays is provided by the hadronic models (proton-induced cascades; see Rachen 1999 and references therein). Red and blue blazars are just the extrema of a continuous distribution of SEDs. This is becoming increasingly apparent from recent multicolor surveys (Laurent-Muehleisen et al. 1998; Perlman et al. 1998), which find sources with intermediate spectral shapes; moreover, trends with bolometric luminosity have been discovered (Sambruna, Maraschi, & Urry 1996; Fossati et al. 1998). In the more luminous red blazars the synchrotron and IC peak frequencies are lower, the Compton dominance (the ratio of the IC to synchrotron peak luminosities) is larger, and the luminosity of the optical emission lines/non-thermal blue bumps is larger than in their blue counterparts (Sambruna 1997). A possible interpretation is that the different types of blazars are due to the different predominant cooling mechanisms of the electrons (Ghisellini et al. 1998). In a simple homogeneous scenario, the synchrotron peak frequency $`\nu _S\propto \gamma _{el}^2`$, where $`\gamma _{el}`$ is the electron energy determined by the competition between acceleration and cooling. Because of the lower energy densities, in lineless blue blazars the balance between heating and cooling is achieved at larger $`\gamma _{el}`$; in red blazars, by contrast, because of the additional external energy density, the balance is reached at lower $`\gamma _{el}`$. Blue blazars are SSC-dominated, while red blazars are EC-dominated. While there are a few caveats to this picture (Urry 1999), the spectral diversity of blazars’ jets cannot be explained by beaming effects only (Sambruna et al. 1996; Georganopoulos & Marscher 1998); it requires instead a change of physical parameters and/or a different jet environment. ## 2 Correlated multiwavelength variability: Testing the blazar paradigm Correlated multiwavelength variability provides a way to test the cooling paradigm, since the various synchrotron and IC models make different predictions for the relative flare amplitudes and shapes, and for the time lags.
First, since the same population of electrons is responsible for emitting both spectral components (in a homogeneous scenario), correlated variability of the fluxes at the low- and high-energy peaks, with no lags, is expected (Ghisellini & Maraschi 1996). Second, if the flare is caused by a change of the electron density and/or seed photons, for a fixed beaming factor $`\delta `$ the relative amplitudes of the flares at the synchrotron and IC peaks obey simple and yet precise relationships (Ghisellini & Maraschi 1996; see however Böttcher 1999). Third, the rise and decay times of the gamma-ray flux are a sensitive function of the external gas opacity and geometry in the EC models (Böttcher & Dermer 1998). Fourth, the rise and decay times of the synchrotron flux depend on a few typical source timescales (Chiaberge & Ghisellini 1999). Fifth, the spectral variability accompanying the synchrotron flare (in X-rays for blue blazars, in the optical for red blazars) is a strong diagnostic of the electron acceleration versus cooling processes (Kirk, Riegler, & Mastichiadis 1998). When cooling dominates, time lags between the shorter and longer synchrotron wavelengths provide an estimate of the magnetic field $`B`$ (in Gauss) of the source via $`t_{lag}\sim t_{cool}\propto E^{-0.5}\delta ^{-0.5}B^{-1.5}`$ (Takahashi et al. 1996; Urry et al. 1997). The role of RXTE. With its wide energy band coverage (2–250 keV), RXTE plays a crucial role in monitoring campaigns of blazars, since it probes the region where the synchrotron and Compton components overlap in the SEDs (Figure 1), allowing us to quantify their relative importance in the different sources. Its high time resolution and good sensitivity are ideal to detect the smallest X-ray variability timescales, study the lags between the harder and softer X-rays, and follow the particle spectral evolution down to timescales of a few hours or less, pinning down the microphysics of blazars’ jets. ### 2.1 Results for red blazars One of the best monitored red blazars is 3C279. From the simultaneous or contemporaneous SEDs in Figure 1a, it is apparent that the largest variations are observed above the synchrotron peak in the IR/optical (not well defined) and above the Compton peak at GeV energies, supporting the synchrotron and IC models. The GeV flare amplitude was roughly the square of the optical one during the earlier campaigns, supporting an SSC interpretation (Maraschi et al. 1994) or a change of $`\delta `$ in the EC models, while in 1996 large variations were recorded at gamma-rays but not at lower energies (Wehrle et al. 1998). During the latter campaign, the rapid decay of the GeV flare (Figure 2a) favors an EC model (Böttcher & Dermer 1998; Wehrle et al. 1998). Note in Figure 2a the good correlation, within one day, of the EGRET and RXTE flares, which provides the first evidence that the gamma-rays and X-rays are cospatial (Wehrle et al. 1998). Another candidate for future gamma-ray monitorings is BL Lac itself. In 1997 July it underwent a strong flare at GeV and optical energies (Bloom et al. 1997). The gamma-ray light curve shows a strong flare possibly anticipating the optical by up to 0.5 days; however, the poor sampling does not allow firmer conclusions. Contemporaneous RXTE observations showed a harder X-ray continuum (Madejski et al. 1999) than in previous ASCA measurements. The SED during the outburst is best modeled by the SSC model from radio to X-rays, while an EC contribution is required above a few MeV (Sambruna et al. 1999a).
A similar mix of SSC and EC is also required to fit the SEDs of PKS 0528+134 (Sambruna et al. 1997; Mukherjee et al. 1998; Ghisellini et al. 1999). ### 2.2 Results for blue blazars Mrk 501, one of the two brightest TeV blazars, attracted much attention in 1997 April when it underwent a spectacular flare at TeV energies (Catanese et al. 1997; Aharonian et al. 1999; Djannati-Atai et al. 1999). This was correlated with a similarly-structured X-ray flare observed with RXTE (Figure 2b), with no delay larger than one day (Krawczynski et al. 1999). These results are consistent with an SSC scenario where the most energetic electrons are responsible for both the hard X-rays (via synchrotron) and the TeV emission (via IC). Figure 1b shows the SEDs of Mrk 501 during the 1997 April TeV activity, compared to the “quiescent” SED from the literature. An unusually flat (photon index $`\mathrm{\Gamma }_X\sim 1.8`$) X-ray continuum was measured by SAX and RXTE during the TeV flare (Pian et al. 1998; Krawczynski et al. 1999), implying a shift of the synchrotron peak by more than two orders of magnitude. This almost certainly reflects a large increase of the electron energy (Pian et al. 1998), or the injection of a new electron population on top of a quiescent one (Kataoka et al. 1999a). Later RXTE observations in 1997 July found the source still in a high and hard X-ray state (Lamer & Wagner 1998), indicating a persistent energizing mechanism. An interesting new behavior was observed during our latest 2-week RXTE-HEGRA monitoring of Mrk 501 in 1998 June (Sambruna et al. 1999b), when 100% overlap between the X-ray and TeV light curves was achieved (Figure 3a). A strong short-lived ($`\sim `$ two days) TeV flare was detected, correlated with a flare in the very hard (20–50 keV) X-rays, with the softer X-rays being delayed by up to one day. As in 1997, large X-ray spectral variations are observed, with the X-ray continuum flattening to $`\mathrm{\Gamma }_X=1.9`$ at the peak of the TeV flare, implying a similar shift of the synchrotron peak to $`\sim 50`$ keV (Figure 3b). However, while in 1997 the TeV spectrum hardened during the flare (Djannati-Atai et al. 1997), as it did in the X-rays, we did not observe significant variability in the TeV hardness ratios during the 1998 flare (Figure 3a, panel (e)); instead the spectrum softens 1-2 days later. The correspondence between the X-ray and TeV spectra is no longer present during the 1998 June flare. ## 3 Acceleration and cooling in blue blazars X-ray monitorings of blue blazars are a powerful diagnostic of the physical processes occurring in these sources. This is because in these objects the X-rays are the high-energy tail of the synchrotron component, where rapid and complex flux and spectral variability is expected, depending on the balance between escape, acceleration, and cooling of the emitting particles (Kirk et al. 1998). An ideal target for X-ray monitorings is PKS 2155–304, one of the brightest X-ray blazars (Treves et al. 1989; Sembay et al. 1993; Pesce et al. 1998). Interest in this source was recently revived due to a TeV detection (Chadwick et al. 1999) during a high X-ray state (Chiappetti et al. 1999). ASCA and SAX observations detected strong X-ray variability, with the softer energies lagging the harder energies by one hour or less (Chiappetti et al. 1999; Kataoka et al. 1999b; Zhang et al. 1999), consistent with a model where the electron cooling dominates the flares.
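The conversion from such a soft/hard lag to a magnetic field, quoted next, can be sketched from the cooling-time scaling $`t_{cool}\propto E^{-0.5}\delta ^{-0.5}B^{-1.5}`$ given in Sec. 2. The numerical prefactor below is an order-of-magnitude value from textbook synchrotron formulas and is our assumption, not a number taken from the cited papers; the band energies in the example are also illustrative:

```python
def b_field_from_lag(lag_s, e_soft_kev, e_hard_kev, delta=10.0):
    """Order-of-magnitude magnetic field (Gauss) from a soft/hard
    X-ray lag, assuming the lag is the difference of synchrotron
    cooling times, t_cool ~ 2.6e3 * B**-1.5 * delta**-0.5 * E_keV**-0.5 s."""
    C = 2.6e3
    factor = e_soft_kev**-0.5 - e_hard_kev**-0.5   # t_cool(soft) - t_cool(hard)
    return (C * delta**-0.5 * factor / lag_s) ** (2.0 / 3.0)

# A one-hour lag between 2 and 8 keV gives B ~ 0.19 G for delta = 10
print(b_field_from_lag(3600.0, 2.0, 8.0))
```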
This implies magnetic fields of $`B\sim 0.1-0.2`$ Gauss (for $`\delta \sim 10`$), similar to Mrk 421 (Takahashi et al. 1996). A new mode of variability was discovered during our RXTE monitoring of PKS 2155–304 in 1996 May, as part of a larger multifrequency campaign (Sambruna et al. 1999c). The sampling in the X-rays was excellent (Figure 4a), and complex flux variations were observed, with short, symmetric flares superposed on a longer baseline trend. Inspection of the hardness ratios (the ratio of the counts in 6–20 keV over the counts in 2–6 keV) versus flux shows that different flares (separated by vertical dashed lines in Figure 4a) exhibit hysteresis loops of opposite signs, both in a “clockwise” and an “anti-clockwise” sense (labeled C and A in Figure 4a, respectively). Applying a correlation analysis to each flare separately, we find that the C loops correspond to a soft lag (softer energies lagging) and the A loop corresponds to a hard lag (harder energies lagging), of the order of a few hours in both cases. Two examples are shown in Figure 4b for the May 18.5–20.2 and 24.2–26.9 flares, respectively. We interpreted the data using the acceleration model of Kirk et al. (1998). Here loops/lags of both signs are expected, depending on how fast the electrons are accelerated compared to their cooling time, $`t_{cool}`$. If the acceleration is instantaneous (i.e., $`t_{acc}\ll t_{cool}`$), cooling dominates the variability and, because of its energy dependence, the harder energies are emitted first, with C loops and soft lags. If instead the acceleration is slower ($`t_{acc}\sim t_{cool}`$), the electrons need to work their way up in energy and the softer energies are emitted first, with A loops and hard lags predicted. A close agreement between the RXTE light curve and the model is found by increasing $`t_{acc}`$ by a factor of 100 going from a C to an A loop, when $`t_{acc}`$ becomes similar to the duration of the flare, and by steepening the electron energy distribution (see Sambruna et al. 1999d for details). Thus we reach the important conclusion that we are indeed observing electron acceleration; together with cooling, this is responsible for the observed X-ray variability properties of PKS 2155–304. The complex spectral behavior of PKS 2155–304 in 1996 May is in contrast to the remarkable simplicity observed in earlier X-ray observations of this source (e.g., Kataoka et al. 1999b) and of other objects. Figure 5 summarizes two epochs of RXTE monitoring of another bright blue blazar, PKS 2005–489, which has an SED very similar to PKS 2155–304 and is a TeV candidate (Sambruna et al. 1995). During 1998 September, our RXTE monitoring detected a general trend of flux increase of 30% amplitude in 3.5 days (Figure 5a). Despite gaps in the sampling, it is clear that the variability at harder energies is faster than at softer energies, consistent with cooling dominating the flares. This is confirmed by the analysis of the hardness ratios versus flux, where only clockwise loops are observed. A consistent behavior was also observed one month later (Figure 5b) during a much larger, longer-lasting flare, when spectral variations occurred on timescales of a few hours (Perlman et al. 1999).
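The sense of such loops can be quantified directly from binned data: the signed area swept by the trajectory in the (flux, hardness) plane distinguishes the two cases. A minimal sketch follows (our illustration; the sign convention assumes hardness on the vertical axis):

```python
import numpy as np

def loop_direction(flux, hardness):
    """Signed area swept by a flare trajectory in the (flux, hardness)
    plane, via the shoelace formula on the implicitly closed loop:
    negative area -> clockwise (C) loop, positive -> anti-clockwise (A)."""
    x = np.asarray(flux, dtype=float)
    y = np.asarray(hardness, dtype=float)
    area = 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)
    return "C (clockwise)" if area < 0 else "A (anti-clockwise)"
```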
## 4 Summary and Future work Recent multiwavelength campaigns of blazars have expanded the currently available database, from which we are learning important new lessons. Detailed modeling of the SEDs of bright gamma-ray blazars of the red and blue types tends to support the current cooling paradigm, where the different blazar flavors are related to the predominant cooling mechanisms of the electrons at the higher energies (EC in more luminous sources, SSC in lower-luminosity ones). However, several observational biases could be present, which can be addressed by future larger statistical samples, especially in gamma-rays. In particular, it will be important to expand the sample of TeV blazars, which currently includes only a handful (5) of sources, with only two bright enough to allow detailed spectral and timing analysis. A promising diagnostic for the origin of the seed photons for the IC process is the shape of the gamma-ray flare. This awaits well-sampled gamma-ray light curves, which will be afforded by the next higher-sensitivity missions (GLAST and AGILE in the GeV band; HESS, VERITAS, MAGIC, and CANGAROO II at TeV energies). Broader-band, higher quality gamma-ray spectra will also be available, allowing a better location of the IC peak and a more precise measure of the gamma-ray spectral shape and its variability. More correlated X-ray/TeV monitorings are necessary, in which RXTE and SAX have crucial roles, to add to the current knowledge of the variability modes. Finally, the current data show that acceleration and cooling are the dominant physical mechanisms responsible for the observed variability properties of blazars’ jets. We are starting to study these processes well in the X-rays for blue blazars, where RXTE has the potential to determine the shortest X-ray variability timescales and lags, probing the jets’ microphysics in even more detail. It would also be interesting to perform similar studies in the optical for red blazars, to compare the nature of the acceleration and cooling in the two subclasses. ## Acknowledgements This work was supported by NASA contract NAS–38252 and NASA grant NAG5–7276. I thank Laura Maraschi for a critical reading of the manuscript, Felix Aharonian and the HEGRA team for allowing me to report the 1998 TeV data of Mrk 501, and Lester Chou for help with the RXTE data reduction.
no-problem/9912/quant-ph9912118.html
ar5iv
text
# A Fast and Compact Quantum Random Number Generator ## I Introduction Random numbers are a vital ingredient in many applications, ranging, to name some examples, from computational methods such as Monte Carlo simulations and programming , over the large field of cryptography, for the generation of crypto code or the masking of messages, to commercial applications like lottery games and slot machines . Recently the range of applications requiring random numbers was extended by the development of quantum cryptography and quantum information processing . A further novelty is the application for which the random number generator presented in this paper was developed: an experiment regarding the entanglement of two particles, a fundamental concept within quantum theory . Firstly, this experiment demanded the generation of random signals with an autocorrelation time of $`<100`$ ns. Secondly, for the clarity of the experimental results, it was necessary that true (objective) randomness be implemented. The range of applications using random numbers has led both to the development of various random number generators and to means for testing the randomness of their output. Generally there are two approaches to random number generation: pseudo random generators, which rely on algorithms implemented on a computing device, and physical random generators, which measure some physical observable expected to behave randomly. Pseudo random generators are based on algorithms, or even a combination of algorithms, and have been highly refined in terms of repetition periods ($`2^{800}`$) and robustness against tests for randomness. But the inherent algorithmic evolution of pseudo random generators is an essential problem in applications requiring unpredictable numbers, as the unguessability of the numbers relies on the randomness of the seeding of the internal state. Depending on the intended application this can be a drawback. The requirements of our specific implementation were even such that the use of a pseudo random number generator was in itself already ruled out by its deterministic nature. Physical random generators use the randomness or noise of a physical observable, such as the noise of electronic devices, the signals of microphones, etc. . Many such physical sources utilize the behavior of very large and complex physical systems which have a chaotic, yet at least in principle deterministic, evolution in time. Due to the many unknown parameters of large systems their behavior is taken for true randomness. Still, purely classical systems have a deterministic nature over relevant time scales, and external influences on the random generator may remain hidden. Current theory implies that the only way to realize a clear and understandable physical source of randomness is the use of elementary quantum mechanical decisions, since in the general understanding the occurrence of each individual result of such a quantum mechanical decision is objectively random (unguessable, unknowable). There exists a range of such elementary decisions which are suitable candidates for a source of randomness. The most obvious process is the decay of a radioactive nucleus ($`{}_{}{}^{85}\mathrm{Kr}`$, $`{}_{}{}^{60}\mathrm{Co}`$), which has already been used . However, the handling of radioactive substances demands extra precautions, especially at the level of radioactivity required by the switching rates of our envisaged random signals.
Optical processes suitable as a source of randomness are the splitting of single photon beams, the polarization measurement of single photons, the spatial distribution of laser speckles, or the light-dark periods of a single trapped ion’s resonance fluorescence signal. But only the first two of these optical processes are fast enough and, in addition, do not require an overwhelming technical effort in their realization. Thus we developed a physical quantum mechanical random generator based on the splitting of a beam of photons with an optical 50:50 beam splitter, or on the polarization measurement of single photons with a polarizing beam splitter. ## II Theory of Operation The principle of operation of the random generator is shown in Figure 1. For the case of the $`50:50`$ beam splitter (BS) (Figure 1(a)), each individual photon coming from the light source and traveling through the beam splitter has, by itself, equal probability to be found in either output of the beam splitter. If a polarizing beam splitter (PBS) is used (Figure 1(b)), then each individual photon polarized at $`45^{\circ}`$ has equal probability to be found in the H (horizontal) polarization or V (vertical) polarization output of the polarizer. In both cases quantum theory predicts that the individual “decisions” are truly random and independent of each other. In our devices this feature is implemented by detecting the photons in the two output beams with single photon detectors and combining the detection pulses in a toggle switch (S), which has two states, 0 and 1. If detector D1 fires, the switch is flipped to state 0 and is left in this state until a detection event in detector D2 occurs, which sets the switch to state 1; the next event in detector D1 then sets S back to state 0 (Figure 1(c)). In the case that several detections occur in a row in the same detector, only the first detection toggles the switch S into the corresponding state, and the following detections leave the switch unaltered. Consequently, the toggling of the switch between its two states constitutes a binary random signal, with the randomness lying in the times of the transitions between the two states. In order to avoid any effects of the photon statistics of the source or of optical interference on the behavior of the random generator, the light source should be set to produce much less than one photon per coherence time. ## III Realization of the Device Figure 2 shows the circuit diagram of the physical quantum random generator. The light source is a red light emitting diode (LED) driven by an adjustable current source (AD586 and TL081) with at most $`110\mu \mathrm{A}`$. Due to the very short coherence time of this kind of source ($`<1`$ ps) it can be ascertained that most of the time there are no photons present within the coherence time of the source, thus eliminating effects of source photon statistics or optical interference. The light emerging from the LED is guided through a piece of pipe to the beam splitter, which can be either a 50:50 beam splitter or a polarizing beam splitter. In the latter case the photons are polarized beforehand with a polarization foil (POL) at $`45^{\circ}`$ with respect to the axis of the dual channel polarization analyzer (PBS). The photons in the two output beams are detected with fast photomultipliers (PM1, PM2). The PMs are enclosed modules which contain all necessary electronics as well as a generator for the tube voltage, and thus only require a $`+12`$ V supply.
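The toggle-switch principle is easy to model numerically. The following sketch is our illustration, not the authors’ software; the detection rate, run length and random seed are made-up values. It draws a Poisson stream of detections, assigns each event to D1 or D2 with probability 1/2 (equivalent to two independent detectors at half the total rate), and measures the resulting toggle rate $`R`$; the corresponding autocorrelation time $`1/(2R)`$ anticipates Eq. (2) of Section IV.

```python
import numpy as np

rng = np.random.default_rng(0)

rate = 52e6     # assumed total detection rate in Hz (illustrative)
T = 2e-3        # simulated duration in seconds

# Poisson stream of detections; each event lands in D1 or D2
# with probability 1/2 (thinning of a Poisson process).
n = rng.poisson(rate * T)
detector = rng.integers(0, 2, n)   # 0 -> D1 fired, 1 -> D2 fired

# The switch state is the label of the last detector that fired,
# so only a change of detector constitutes a toggle event.
toggles = np.count_nonzero(np.diff(detector) != 0)
R = toggles / T                    # average toggle rate, about rate/2
print(f"toggle rate R   = {R / 1e6:.1f} MHz")
print(f"tau_ac = 1/(2R) = {1e9 / (2 * R):.1f} ns")
```

On average every second detection toggles the switch, so the toggle rate is about half the total detection rate; the real device of course derives its randomness from the beam splitter, not from a pseudo-random generator as in this toy model.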
The tube voltages can be adjusted with potentiometers (TV1, TV2) for optimal detection pulse rates and pulse amplitudes. The output signals are amplified in two Becker&Hickl amplifier modules (A) and transmitted to the signal electronics, which is realized in emitter-coupled logic (ECL). The detector pulses are converted into ECL signals by two comparators (MC1652) referenced to adjustable threshold voltages set by potentiometers (RV1, RV2). The actual synthesis of the random signal is done within an RS flip-flop (MC10EL31): PM1 triggers the S-input and PM2 triggers the R-input of the flip-flop. The output of this flip-flop toggles between the high and low state depending on whether the last detection occurred in PM1 or PM2. Finally the random signal is converted from ECL to TTL logic levels (MC10EL22) for further usage. In order to generate random numbers on a personal computer, the signal from the random generator is sampled periodically and accumulated in a 32-bit wide shift register (Figure 3). Every 32 clock cycles the contents of the shift register are transferred in parallel to a personal computer via a fast digital I/O board. In this way a continuous stream of random numbers is transferred to the computer.
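On the host side, the transferred words then have to be unpacked into bits again. The snippet below is a hypothetical sketch of that bookkeeping; the text does not specify the bit ordering of the shift register, so LSB-first is our assumption.

```python
import numpy as np

def pack_bits(bits: np.ndarray) -> np.ndarray:
    """Pack a 0/1 stream into 32-bit words, mimicking the shift
    register that is read out every 32 clock cycles (LSB first)."""
    blocks = bits[: len(bits) // 32 * 32].reshape(-1, 32)
    weights = 1 << np.arange(32, dtype=np.uint64)
    return (blocks.astype(np.uint64) * weights).sum(axis=1).astype(np.uint32)

def unpack_bits(words: np.ndarray) -> np.ndarray:
    """Recover the sampled bit stream from the transferred words."""
    out = (words[:, None].astype(np.uint64) >> np.arange(32, dtype=np.uint64)) & 1
    return out.reshape(-1).astype(np.uint8)

stream = np.random.default_rng(1).integers(0, 2, 1 << 10, dtype=np.uint8)
words = pack_bits(stream)            # stand-in for the I/O board transfer
assert np.array_equal(unpack_bits(words), stream)
```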
## IV Testing the Randomness of the Device Up to now no general definition of randomness exists, and the discussion is still ongoing. Two reasonable and widely accepted conditions for the randomness of any binary sequence are that it be “chaotic” and “typical”. The first of these concepts was introduced by Kolmogorov and deals with the algorithmic complexity of the sequence, while the second originates from Martin-Löf and says that no particular random sequence may have any features that make it distinguishable from the ensemble of all random sequences. With pseudo-random generators it is always possible to predict all of their properties with more or less mathematical effort, owing to the fact that their algorithm is known. Thus one may easily reject their randomness from a rigorous point of view. In contrast, the most desired feature of a true random generator, its “truth”, entails the impossibility in principle of ever describing such a generator completely and proving its randomness beyond any doubt. This could only be done by recording its random sequence for an infinite time. One is obviously limited experimentally to finite samples taken out of the infinite random sequence. There are many empirical tests, mostly developed in connection with certain Monte Carlo simulation problems, for testing the randomness of such finite samples. The more tests a sample passes, the higher we estimate its randomness; and we consider a test for randomness the better, the smaller or more hidden the regularities are that it can detect. As the range of tests for the randomness of a sequence is almost unlimited, we must find tests which can serve as an appropriate measure of randomness according to the specific requirements of our application. Since the experiment that our random generators are designed for demands random signals at a high rate, we focus on the time the random generators take to establish a random state of their signal, starting from a point in time where the output state and the internal state of the generator may be known. We will briefly describe the relatively intuitive tests that will be applied to data samples taken from the random generator, which we consider sufficient to qualify the device for its use in the experiment. 1. Autocorrelation Time of the Signal: For a binary sequence as produced by our random generator the autocorrelation function exhibits an exponential decay of the form: $$A(\tau )=A_0\mathrm{e}^{-2R|\tau |},$$ (1) where $`R`$ is the average toggle rate of the signal, $`A_0`$ is the normalization constant and $`\tau `$ is the delay time. By definition the autocorrelation time is given by $$\tau _{ac}=\frac{1}{2R}.$$ (2) The autocorrelation function is a measure of the average correlation between the signal at a time $`t`$ and a later time $`t+\tau `$. 2. Internal Delay within the Device: This is the internal delay time within the device between the emission of a photon and its effect on the output signal. This internal delay time is the minimal time the generator needs to establish a truly random state of its output. 3. Equidistribution of the Signal: This is the most obvious and simple test of randomness of our device, as for a random generator the occurrence of each event must be equally probable. Yet, by itself the equidistribution is not a criterion for the randomness of a sequence. 4. Distribution of Time Intervals between Events: The transitions of the signal generated by our system are independent of any preceding events and signals within the device. For such a Poissonian process the time intervals between successive events are distributed exponentially: $$p(T)=p_0\mathrm{e}^{-T/T_0},$$ (3) where $`p(T)`$ is the probability of a time interval $`T`$ between two events, $`T_0=1/R`$ is the mean time interval (the reciprocal of the average toggle rate $`R`$ defined earlier), and $`p_0`$ is the normalization constant. The evaluation of $`p(T)`$ for a data sample taken from our generator shows directly for which time intervals the independence between events is ascertained and for which time intervals the signal is dominated by bandwidth limits or other deficiencies within the system. 5. Further Illustrative Tests of Randomness: These statistical tests will be applied to samples of random numbers produced by the random generator in order to illustrate the functionality of the device. For the application our random generators are designed for, these statistical measures are not as important as the tests described above, and the tests proposed here represent just a tiny selection of possible tests (a short implementation sketch follows this list). Yet, these tests allow a cautious comparison of random numbers produced with our device with random numbers taken from other sources. The code for the evaluation of these tests was developed in . 1. Equidistribution and Entropy of $`n`$–Bit Blocks: Provided that the sample data set is sufficiently long, all possible $`n`$–bit blocks (where $`n`$ is the length of the block) should appear with equal probability within the data set. A direct, but insufficient, way of determining the equidistribution of a data set is to evaluate the mean value of all $`n`$–bit blocks, which should be $`(2^n-1)/2`$. This will give the same result for any symmetric distribution. The distribution of $`n`$–bit blocks of a data set corresponds to the entropy, a value which is often used in the context of random number analysis. The entropy is defined as: $$H_n=-\sum _ip_i\mathrm{log}_2p_i$$ (4) and is expressed in units of bits. $`p_i`$ is the empirically determined probability of finding the $`i`$–th block. For a set of random numbers a block of length $`n`$ should produce $`n`$ bits of entropy; in the case of bytes, which are blocks of 8 bits, the entropy of these blocks should be 8 bits.
2. Blocks of $`n`$ Zeros or Ones: Another test for the randomness of a set of bits is the counting of blocks of consecutive zeros or ones. Each bit is equally likely to be a zero as a one; therefore the probability of finding blocks of $`n`$ concatenated zeros or ones should be proportional to $`2^{-n}`$. 3. Monte-Carlo estimation of $`\pi `$: An appealing way of demonstrating the quality of a set of random numbers produced by a random generator is a simple Monte Carlo estimation of $`\pi `$. The idea is to map the numbers onto points in a square with a quarter circle fitted into the square and to count the points which lie within the quarter circle. The ratio of the number of points lying in the quarter circle to the total number of points is an estimation of $`\pi /4`$.
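These three illustrative tests can be coded in a few lines. The sketch below is our reconstruction, not the evaluation code referred to above; the block widths and the pseudo-random stand-in for the device output are arbitrary choices.

```python
import numpy as np

def block_entropy(bits, n=8):
    """Empirical entropy (in bits) of non-overlapping n-bit blocks, Eq. (4)."""
    blocks = bits[: len(bits) // n * n].reshape(-1, n)
    vals = blocks @ (1 << np.arange(n))
    p = np.bincount(vals, minlength=2 ** n) / len(vals)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def run_length_counts(bits):
    """Counts of maximal runs of equal bits; ideally proportional to 2**(-n)."""
    change = np.flatnonzero(np.diff(bits) != 0)
    lengths = np.diff(np.r_[-1, change, len(bits) - 1])
    return np.bincount(lengths)[1:]          # entry j-1 = number of runs of length j

def mc_pi(bits, width=16):
    """Monte Carlo estimate of pi from pairs of width-bit integers."""
    vals = bits[: len(bits) // width * width].reshape(-1, width) @ (1 << np.arange(width))
    x, y = vals[0::2].astype(float), vals[1::2].astype(float)
    x = x[: len(y)]
    inside = x ** 2 + y ** 2 < float(2 ** width - 1) ** 2
    return 4.0 * inside.mean()               # the in-circle ratio estimates pi/4

bits = np.random.default_rng(2).integers(0, 2, 8_000_000, dtype=np.uint8)
print(block_entropy(bits), run_length_counts(bits)[:6], mc_pi(bits))
```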
## V Operation of the Device Two random generators were each built into a single-width NIM module (dimensions: $`25\times 19\times 3\mathrm{cm}^3`$) in order to match our existing equipment. The optical beam splitter, the two photomultipliers and the pulse amplifiers are mounted on a base-plate within the modules, and the electronics is realized on printed circuit boards. The random generator modules require only a standard voltage supply of $`\pm 6`$ V and $`+12`$ V. The random signal generators were configured either with a 50:50 beam splitter or with a polarizing beam splitter as the source of randomness. In both cases they performed equally well. Yet, the polarization measurement of the photons offers the advantage that the division ratio of the photons in the two beams can be adjusted by slightly rotating the polarization foil sitting just in front of the beam splitter. The results presented here were all obtained from a random signal generator configured with the polarizing beam splitter. After warm-up the devices require a little adjustment for maximum average toggle rate and equidistribution of the output signal. The average toggle rate of the random signals is checked with a counter, and the equidistribution is checked by sampling the signal a couple of thousand times and counting the occurrences of zeros and ones. These measures are both optimized by trimming the reference voltages of the discriminators (RV1, RV2) and by adjusting the tube voltages of the photomultipliers (TV1, TV2). The maximum average toggle rate of the random signals at the output of the random number generators is $`34.8`$ MHz. Once the devices are set up in this way they run stably for many hours. Typically the PMs produce output pulses with an amplitude of up to $`50`$ mV at a width of $`2`$ ns. The rise and fall times of the signals produced by the random number generators are $`3.3`$ ns. As it turns out, this limit is set by the output driver stage of the electronics. The transition times of the internal ECL signals were measured to be less than $`1`$ ns, which is in accordance with the specifications of this ECL logic. ## VI Performance of the Device The time delay between the emission of a photon from the light source and its effect on the output signal, after running through the detectors and the electronics, was measured by using a pulsed light source instead of the continuous LED and observing the electronic signals within the generator on an oscilloscope. The total time delay between a light pulse and its effect on the output was $`75`$ ns, and consists of $`20`$ ns in the light source, light path and detection, $`20`$ ns in the amplifiers and cables, and $`35`$ ns in the main electronics. In order to evaluate the autocorrelation time of the signal produced by the device, signal traces consisting of 15000 points were recorded on a digital storage oscilloscope for three different average toggle rates. The sampling rate of the oscilloscope was $`500`$ MS/s for the toggle rates of $`34.8`$ MHz and $`26`$ MHz, and $`250`$ MS/s for the toggle rate of $`16`$ MHz. The autocorrelation function for each trace was evaluated on a personal computer, and the autocorrelation time $`\tau _{ac}`$ was extracted by fitting an exponential decay model to these functions (Figure 4). The resulting autocorrelation times are $`11.8\pm 0.2`$ ns ($`14.4`$ ns) for the $`34.8`$ MHz signal, $`16.0\pm 0.6`$ ns ($`19.2`$ ns) for the $`26`$ MHz signal and $`30.7\pm 0.5`$ ns ($`31.3`$ ns) for the $`16`$ MHz signal, which are comparable with the autocorrelation times calculated with expression (2) from the average toggle rate $`R`$, given in parentheses. The time difference between successive toggle events of the random signal is measured with a time interval counter. The start input of the counter is triggered by the positive transition, and the stop input is triggered by the negative transition of the signal. Figure 5 shows the distribution of $`10^6`$ time intervals for a random signal with an average toggle rate of $`26`$ MHz. For times $`<3`$ ns the transition time of the electronics between the two logical states becomes evident as a cutoff in the distribution. For intervals of up to $`35`$ ns some wiggles of the distribution are apparent. This is most likely due to ringing of the signals on the transmission line. For times $`>35`$ ns the distribution approaches an exponential decay function. The spike at $`96`$ ns was identified as an artifact of the counter due to an internal time marker. As described earlier, our device can produce random numbers by periodically sampling the signal and cyclically transferring the data to a personal computer. Our personal computer (Pentium processor, $`120`$ MHz, $`144`$ MB RAM, running LabView on Win95) manages to register sets of random numbers up to a size of $`15`$ MByte in a single run at a maximum sample rate of $`30`$ MHz. In order to obtain independent and evenly distributed random numbers, the sampling period must be well above the autocorrelation time of the random signal. We observed that for a signal autocorrelation time of roughly $`20`$ ns a sampling rate of $`1`$ MHz suffices for obtaining “good” random numbers. All data samples used for the following evaluations consisted of $`80\times 10^6`$ bits produced in continuous runs with a $`1`$ MHz bit sampling frequency.
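The autocorrelation analysis just described can be reproduced on a synthetic trace. The sketch below is illustrative only; it uses a pseudo-random telegraph signal in place of a stored oscilloscope trace, with parameters echoing the numbers quoted above, and recovers $`\tau _{ac}=1/(2R)`$ from the exponential decay of Eq. (1).

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalised autocorrelation of a sampled binary signal."""
    s = 2.0 * x - 1.0                # map {0,1} -> {-1,+1}
    s -= s.mean()
    acf = np.array([np.dot(s[: len(s) - k], s[k:]) for k in range(max_lag)])
    return acf / acf[0]

R, fs, N = 26e6, 500e6, 15000        # toggle rate, sampling rate, trace length
rng = np.random.default_rng(3)
trace = np.cumsum(rng.random(N) < R / fs) % 2   # random telegraph signal

acf = autocorrelation(trace, 60)
lags = np.arange(60) / fs
mask = acf > 0.05                    # fit only the clean exponential part
slope = np.polyfit(lags[mask], np.log(acf[mask]), 1)[0]   # ln A = -2 R |tau|
print(f"fitted tau_ac = {-1e9 / slope:.1f} ns (expected {1e9 / (2 * R):.1f} ns)")
```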
Figure 6(a) depicts the distribution of blocks of 8-bit length within a data sample. This distribution approaches an even distribution, but still shows some non-statistical deviations, such as a peak in the center and some symmetric deviations. Possibly this is due to a still too high sampling rate and a slight misadjustment of the generator. The distribution of blocks of $`n`$ concatenated zeros and ones within a sample should be proportional to a $`2^{-n}`$–function (Figure 6(b)). The slopes of the logarithmically scaled distributions are determined with a linear fit; in absolute value they are $`0.29725\pm 0.00121`$ for the $`n`$–zero blocks and $`0.30299\pm 0.00138`$ in the case of the $`n`$–one blocks. Ideally, both slopes should be equal in magnitude to $`\mathrm{log}(2)=0.30103`$. The deviation can be understood as a consequence of minor differences in the probabilities of finding a zero or a one at the output of the generator, again due to misadjustment of the generator. The mean value of $`8`$-bit blocks, the entropy of $`8`$-bit blocks and the Monte Carlo estimation of $`\pi `$ are evaluated for a data sample produced by our random generator and compared to data samples taken from the Marsaglia CD–ROM and a sample data set built with the Turbo C++ random function (Table I). The results in Table I are in favor of our device, but the numbers must be treated with caution, as they represent only a comparison of single samples which may not be representative. ## VII Discussion and Outlook The experimental results presented in the section above give strong support to the expectation that our physical quantum random generator is capable of producing a random binary sequence with an autocorrelation time of $`12`$ ns and an internal delay time of $`75`$ ns. This underlines the suitability of these devices for use in our specific experiment, which demands random signal generators with a time for establishing a random output state of less than $`100`$ ns, a requirement easily met by the physical quantum random generators presented in this paper. The high speed of our random generators is made possible by the implementation of state-of-the-art technology, using fast single photon detectors as well as high speed electronics. Moreover, the collection of tests applied to the signals and random numbers produced with our quantum random generator demonstrates the quality of randomness that is obtained by using a fundamental quantum mechanical decision as a source of randomness. Some methods for enhancing the performance, be it in terms of signal equidistribution and/or autocorrelation time, can be foreseen. For instance, a different method for generating the random signal would be to have each of the PMs toggle a $`\frac{1}{2}`$-divider, which results in evenly distributed signals. These signals could be combined in an XOR gate in order to utilize the quantum randomness of the polarization analyzer while fully keeping the equidistribution of the signal. A reduction of the signal autocorrelation time is possible by optimizing the signal electronics for speed (e.g. using ECL signals throughout the design). Further, it is simple to parallelize several such random generators within one single device, as there is no crosstalk between the subunits, since the elementary quantum mechanical processes are completely independent and undetermined. Hence designing a physical quantum random number generator capable of producing true random numbers at rates $`>100`$ MBit/s or even above 1 GBit/s is a feasible task. We believe that random generators designed around elementary quantum mechanical processes will eventually find many applications for the production of random signals and numbers, since the source of randomness is clear and the devices operate in a straightforward fashion. ## VIII Acknowledgement This work was supported by the Austrian Science Foundation (FWF), project S6502, by the U.S. NSF grant no. PHY 97-22614, and by the APART program of the Austrian Academy of Sciences.
# On the Uniqueness of Black Hole Attractors HUTP-99/A066, hep-th/9912002 Martijn Wijnholt and Slava Zhukov Jefferson Laboratory of Physics, Harvard University, Cambridge, MA 02138, USA Abstract: We examine the attractor mechanism for extremal black holes in the context of five dimensional $`N=2`$ supergravity and show that attractor points are unique in the extended vector multiplet moduli space. Implications for black hole entropy are discussed. December 1999 1. Introduction BPS black holes in four and five dimensional $`N=2`$ supergravity have been much studied using the attractor mechanism. To construct a black hole solution, one specifies the charges and the asymptotic values of the moduli. The moduli then evolve as a function of the radius until they reach a minimum of the central charge at the horizon. This minimum value determines the entropy of the black hole. Consequently, it is important to know whether different values of the central charge can be attained at different local minima, or, even more basically, whether multiple local minima are allowed at all. If uniqueness fails, one would be led to believe that the degeneracy of BPS states does not depend solely on the charges. As it turns out, in the five dimensional case we can show that at most one critical point of the central charge can occur; that is the subject of this paper. The structure of the argument is simple. Remarkably, the extended Kähler cone turns out to be convex. Therefore we can take a straight line between two supposed minima and analyse the (correctly normalised) central charge along this line. The central charge cannot have two minima when restricted to this line, yielding a contradiction. We would like to emphasize that although Calabi-Yau spaces will be in the back of our minds for most of this paper, the geometric statements have clear analogues in five dimensional supergravity, so that our arguments are independent of the presence of a Calabi-Yau. We will point out some of the parallel interpretations where they occur. In section two we provide two arguments for uniqueness in a single Kähler cone. In section three we tackle the extended Kähler cone, and in section four we discuss some of the implications. The reader may wish to start with section four before moving on to the arguments of sections two and three. 2. Single Kähler cone 2.1. Review of the attractor mechanism Let us recall the basic setting for the five-dimensional attractor problem. We consider M-theory compactified on a Calabi-Yau threefold. As the low-energy effective theory we obtain five dimensional $`N=2`$ supergravity with $`h^{1,1}-1`$ vector multiplets, $`h^{2,1}+1`$ hypermultiplets and the gravity multiplet. The vector multiplets each contain one real scalar, so the vector moduli space is $`(h^{1,1}-1)`$-dimensional. From the point of view of Calabi-Yau compactification these scalars have a simple geometrical interpretation. Let us denote the Calabi-Yau three-fold by $`X`$ and expand two-cycles $`Q`$ of $`X`$ as $`Q=Q_ie^i`$, where $`e^i`$ is a basis for $`H_2(X,𝐙)`$. The dual basis for $`H^2(X,𝐙)`$ will be written with lower indices, $`e_j`$, so that $`\int _{e^i}e_j=\delta _j^i`$. The Calabi-Yau Kähler class $`k`$ can be expanded as $`k=k^ie_i`$, which gives an $`h^{1,1}`$-dimensional space of parameters.
One parameter, corresponding to the total volume of the Calabi-Yau, is part of the universal hypermultiplet, and the rest of the parameters, corresponding to the sizes of cycles in the Calabi-Yau, describe precisely the moduli space of vector multiplets. The dynamics of vector multiplets in $`N=2`$ supergravity is completely governed by the prepotential $`F(k)`$, which is a homogeneous cubic polynomial in the vector moduli coördinates $`k^i`$. It is a special property of five dimensional supergravity that there are no nonperturbative quantum corrections to this prepotential. Geometrically the prepotential is simply the volume of $`X`$ in terms of $`k`$ $$F(k)\equiv \frac{1}{6}\int _Xkkk=\frac{1}{6}k^ik^jk^ld_{ijl}$$ where $`d_{ijl}`$ denote the triple intersection numbers of $`X`$ $$d_{ijl}=\int _Xe_ie_je_l.$$ In order to abbreviate the formulae, let us introduce the following notation: $$\begin{array}{cc}\hfill ab& \equiv a_mb^m\hfill \\ \hfill abc& \equiv a^ib^jc^ld_{ijl}\hfill \\ \hfill k^3& \equiv kkk\hfill \end{array}$$ To decouple the universal hypermultiplet coördinate we need to impose the constraint $`F(k)=1`$, which gives us the vector moduli space as a hypersurface inside the Kähler cone. Alternatively, we will sometimes think of the moduli space as a real projectivisation of the Kähler cone. In this case we have to consider functions invariant under an overall rescaling of the $`k`$’s. The prepotential defines the metric on moduli space as well as the gauge coupling matrix for the five-dimensional gauge fields. Including the graviphoton, there are exactly $`h^{1,1}`$ $`U(1)`$ gauge fields in the theory, and their moduli-dependent gauge coupling matrix is given by $$G_{ij}=-\frac{1}{2}\frac{\partial ^2}{\partial k^i\partial k^j}\mathrm{log}F(k)=-\frac{1}{2}\frac{\partial ^2}{\partial k^i\partial k^j}\mathrm{log}k^3$$ The moduli space metric $`g_{ij}`$ is just the restriction of $`G_{ij}`$ to the hypersurface $`F(k)=1`$. In the supergravity Lagrangian $`G_{ij}`$ and $`g_{ij}`$ multiply the kinetic terms for the gauge fields and the moduli fields respectively. It is then very important that both metrics be positive-definite inside the physical moduli space. The tangent space to the $`F(k)=1`$ hypersurface is given by vectors $`\mathrm{\Delta }k`$ such that $`kk\mathrm{\Delta }k=0`$. Then positivity of $`g_{ij}`$ requires $$\mathrm{\Delta }k^ig_{ij}\mathrm{\Delta }k^j=\mathrm{\Delta }k^iG_{ij}\mathrm{\Delta }k^j=-3k\mathrm{\Delta }k\mathrm{\Delta }k>0.$$ Next we consider BPS states with given electric charges. The vector of electric charges with respect to the $`h^{1,1}`$ $`U(1)`$ gauge fields can be thought of as an element $`Q`$ of $`H_2(X,𝐙)`$. In M-theory language these BPS states are M2-branes wrapped on a holomorphic cycle in the class $`Q`$. For large charges we can represent them in supergravity by certain extremal black hole solutions. The structure of these solutions is as follows. As one moves radially towards the black hole the vector multiplet moduli fields $`k^i`$ vary. They follow the gradient flow of the function $`Z\equiv Qk`$: $$\begin{array}{cc}\hfill \partial _\tau U& =+\frac{1}{6}e^{2U}Z\hfill \\ \hfill \partial _\tau k^i& =-\frac{1}{2}e^{2U}G^{ij}D_jZ\hfill \end{array}$$ Here $`U\equiv U(r)`$ is the function determining the five dimensional metric $$ds^2=-e^{-4U}dt^2+e^{2U}(dr^2+r^2d\mathrm{\Omega }^2),$$ $`\tau =1/r^2`$ and the covariant derivative is $$D_j=\partial _j-\frac{1}{6}k^ik^ld_{ijl}.$$ Geometrically $`Z`$ is just the volume of the holomorphic cycle $`Q`$ in the Calabi-Yau with Kähler class $`k`$.
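The formulas above are easy to check in a toy model. In the sketch below the intersection numbers $`d_{ijl}`$ are hypothetical two-moduli values invented for illustration; the code builds $`G_{ij}`$ from the prepotential and verifies that, on a tangent direction with $`kk\mathrm{\Delta }k=0`$, one has $`\mathrm{\Delta }kG\mathrm{\Delta }k=-3k\mathrm{\Delta }k\mathrm{\Delta }k/k^3>0`$.

```python
import numpy as np

# Hypothetical, fully symmetric intersection numbers for two moduli:
# d_111 = 6, d_112 = 3, d_122 = 1, d_222 = 0.
d = np.zeros((2, 2, 2))
d[0, 0, 0] = 6.0
for p in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:
    d[p] = 3.0
for p in [(0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    d[p] = 1.0

k3 = lambda k: np.einsum('ijl,i,j,l->', d, k, k, k)   # k^3 = kkk
kk = lambda k: np.einsum('ijl,j,l->i', d, k, k)       # (kk)_i

def G(k):
    """G_ij = -(1/2) d_i d_j log k^3, written out explicitly."""
    return (-3.0 * np.einsum('ijl,l->ij', d, k) / k3(k)
            + 4.5 * np.outer(kk(k), kk(k)) / k3(k) ** 2)

k = np.array([0.7, 0.5])                  # a point inside the toy cone
dk = np.array([1.0, -0.3])
dk = dk - k * (kk(k) @ dk) / k3(k)        # project onto the tangent space

lhs = dk @ G(k) @ dk
rhs = -3.0 * np.einsum('ijl,i,j,l->', d, k, dk, dk) / k3(k)
print(lhs, rhs, lhs > 0)                  # lhs equals rhs, and both are positive
```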
We will refer to $`Z(k)`$ as the central charge because, when evaluated at infinity, $`Z`$ is indeed the electric central charge of the $`N=2`$ algebra. As we approach the horizon of the black hole at $`r=0`$, or $`\tau =\mathrm{\infty }`$, the central charge rolls into a local minimum and the moduli stabilise there; let us call that point $`k_0`$. The area of the horizon and thus the entropy of the black hole are determined only by the minimal value of the central charge \[4,5\]: $$S=\frac{\pi ^2}{12}Z_0^{3/2}=\frac{\pi ^2}{12}\left(\int _Qk_0\right)^{3/2}=\frac{\pi ^2}{12}(Qk_0)^{3/2}.$$ Those points in the moduli space where the central charge attains a local minimum for a fixed electric charge $`Q`$ are called attractor points. The microscopic count of the number of BPS states with given charge has been performed for the special case of compactifications of M-theory on elliptic Calabi-Yau threefolds. There the attractor point for any charge vector $`Q`$ was found explicitly, and the resulting entropy prediction (2.1) agreed with the microscopic count for large charges. It was pointed out that in the case of general Calabi-Yau compactifications an attractor point is not necessarily unique, and in principle, for making an entropy prediction, one needs to specify not only the charges of the black hole but also an attractor basin, that is, a region in moduli space in which all the points flow to a given attractor point along a path from infinity to the horizon. In the remainder of this article we show that if a minimum of the central charge exists, the attractor basin for this minimum covers the entire moduli space. There cannot be a second local minimum, and so the specification of the charges of the black hole is sufficient for determining the attractor point. 2.2. Geometric argument To find an attractor point explicitly, one needs to extremise the central charge subject to the constraint $`kkk=1`$, which leads directly to the five dimensional attractor equation $$Q_i=(Qk)k^jk^ld_{ijl}$$ In differential form notation, it reads $$[Q]=\left(\int _Qk\right)[kk].$$ Here $`[Q]`$ is a four-form which is Poincaré dual to the two-cycle $`Q`$. For convenience, we will leave out the square brackets in what follows. Let us recall some standard facts about the Lefschetz decomposition (see for instance ). On any Kähler manifold the Kähler class is a harmonic form of type (1,1). It can therefore be used to define an action on the cohomology. We define the raising operator to be the map from $`H^{p,q}(X,𝐂)`$ to $`H^{p+1,q+1}(X,𝐂)`$ obtained by wedging with $`k`$ $$L_k\alpha =k\alpha $$ and similarly the lowering operator to be the map from $`H^{p,q}(X,𝐂)`$ to $`H^{p-1,q-1}(X,𝐂)`$ obtained by contracting with $`k`$ $$\mathrm{\Lambda }_k\alpha =\iota _k\alpha .$$ The commutator sends forms of type (p,q) to themselves up to an overall factor: $$[L,\mathrm{\Lambda }]=(p+q-n)𝐈$$ where $`n`$ is the complex dimension of $`X`$. Thus $`L`$, $`\mathrm{\Lambda }`$ and $`(p+q-n)𝐈`$ form an $`sl(2,𝐑)`$ algebra, and the cohomology of $`X`$ decomposes as a direct sum of irreducible representations. When $`X`$ is a Calabi-Yau threefold, the decomposition is $$H^{\ast }(X,𝐂)=1(\mathrm{𝟑}/\mathrm{𝟐})\oplus (h^{1,1}-1)(\mathrm{𝟏}/\mathrm{𝟐})\oplus (2h^{2,1}+2)(\mathrm{𝟎}).$$ The spin 3/2 representation corresponds to $`\{1,k,k^2,k^3\}`$. There can be no spin 0 representations in $`H^{1,1}(X,𝐂)`$, because if $`\alpha `$ is of type (1,1) and $`L_k\alpha =0`$ then by equation (2.1) we deduce that $`\alpha `$ is zero.
In particular, the raising operator $`L_k`$ maps classes of type (1,1) isomorphically onto classes of type (2,2). We will use this fact in the following argument.<sup>1</sup> We thank C. Vafa for this argument. To prove that a single Kähler cone supports at most one attractor point, assume to the contrary that there are two such points, $`k_0`$ and $`k_1`$, satisfying (2.1). Then we may rescale $`k_0`$ or $`k_1`$ by a positive factor such that $$k_0k_0=\pm k_1k_1.$$ First we fix the sign in the above equation. Since $`\frac{1}{2}(k_0+k_1)`$ is inside the Kähler cone, $`k_0+k_1`$ is an admissible Kähler class and $`\int _X(k_0+k_1)^3`$ is positive. Assuming the sign in (2.1) is minus, one may expand $`(k_0+k_1)^3`$ and deduce that $$(k_0+k_1)^3=-2(k_0)^3-2(k_1)^3<0,$$ which is impossible, so the sign is a plus. Therefore we have $$(k_0+k_1)(k_0-k_1)=0.$$ As discussed above, $`L_{k_0+k_1}`$ cannot annihilate any classes of type (1,1), because $`k_0+k_1`$ is an allowed Kähler class. We conclude that $`k_0-k_1`$ must vanish.
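The linear-algebra content of this step can be made concrete. In coordinates, wedging with $`k`$ maps $`H^{1,1}`$ to $`H^{2,2}`$ through the matrix $`(L_k)_{ij}=d_{ijl}k^l`$, and the Lefschetz argument says this matrix is nondegenerate for $`k`$ inside the Kähler cone. The toy check below reuses the hypothetical intersection numbers from the previous sketch; for a genuine Calabi-Yau the nondegeneracy is a theorem, not something that needs numerical testing.

```python
import numpy as np

# Same hypothetical intersection numbers as in the previous sketch.
d = np.zeros((2, 2, 2))
d[0, 0, 0] = 6.0
for p in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:
    d[p] = 3.0
for p in [(0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    d[p] = 1.0

def L(k):
    """Matrix of wedging with k, (L_k)_ij = d_ijl k^l."""
    return np.einsum('ijl,l->ij', d, k)

k0 = np.array([0.7, 0.5])
k1 = np.array([0.4, 0.9])

M = L(k0 + k1)
print("det L_(k0+k1) =", np.linalg.det(M))   # nonzero inside the cone
# Because the determinant is nonzero, L_(k0+k1) x = 0 only for x = 0;
# hence the relation (k0+k1)(k0-k1) = 0 forces k0 = k1.
```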
2.3. Physical argument One doesn’t really need the attractor equation to prove that in a single cone multiple critical points cannot occur. Another argument makes use of simple properties of the prepotential (2.1) and gives further insight into the behaviour of the central charge function. Let us examine the behaviour of the central charge along straight lines in the Kähler cone. First take any two points $`k_0`$ and $`k_1`$ in the Kähler cone. By convexity of the cone we can take a straight line from $`k_0`$ to $`k_1`$, $$k(t)=k_0+t\mathrm{\Delta }k,\mathrm{\Delta }k=k_1-k_0.$$ For $`t`$ between $`0`$ and $`1`$, and a little beyond those values, $`k(t)`$ certainly lies in the Kähler cone, but it no longer satisfies $`k(t)^3=1`$. To cure this we think of the moduli space as a real projectivisation of the Kähler cone and define the central charge everywhere in the cone by normalising $`k`$: $$Z(k)=\frac{\int _Qk}{(\int _Xkkk)^{1/3}}=\frac{Qk}{(k^3)^{1/3}}$$ Then the central charge along the straight line (2.1) is just $$Z(t)=\frac{Qk(t)}{(k(t)^3)^{1/3}}.$$ Let us also assume that the central charge is positive at $`k_0`$, i.e. $`Z(0)>0`$. Otherwise we would consider the same problem with the opposite charge. Differentiating $`Z(t)`$, one finds for the first derivative $$Z^{}(t)=(k^3)^{-4/3}\left((Q\mathrm{\Delta }k)(k^3)-(Qk)(\mathrm{\Delta }kkk)\right).$$ Now suppose $`Z(t)`$ has a critical point $`t_c`$ where $`Z^{}(t_c)=0`$. Then the second derivative at $`t_c`$ can be expressed as $$\begin{array}{cc}\hfill Z^{\prime \prime }(t_c)& =2(k^3)^{-4/3}(Qk)\left(\frac{(\mathrm{\Delta }kkk)^2}{k^3}-\mathrm{\Delta }k\mathrm{\Delta }kk\right)\hfill \\ & =2(k^3)^{-4/3}(Qk)(B_{ij}\mathrm{\Delta }k^i\mathrm{\Delta }k^j).\hfill \end{array}$$ The bilinear form $`B_{ij}`$ has the following properties: exactly one of its eigenvalues is zero (namely in the $`k`$-direction) and the other eigenvalues are positive. In the language of Calabi-Yau geometry this holds<sup>2</sup> See for instance , page 123. Note the misprint there; it should say $`2k=p+q`$. because $`\int _Xk\mathrm{\Delta }k\mathrm{\Delta }k<0`$ for any $`\mathrm{\Delta }k`$ that satisfies $`\int _Xkk\mathrm{\Delta }k=0`$. Now if $`\mathrm{\Delta }k`$ were proportional to $`k(t)=k_0+t\mathrm{\Delta }k`$ for some $`t`$, we would find that $`k_0`$ and $`k_1`$ are in fact equal. Thus $`\mathrm{\Delta }k`$ necessarily has a piece that is orthogonal to $`k(t)`$, and so $$B_{ij}\mathrm{\Delta }k^i\mathrm{\Delta }k^j>0\text{ strictly}.$$ In the language of supergravity the above statement follows from the fact that the form $`B`$ is proportional to the metric when restricted to directions tangent to the moduli space, for which $`kk\mathrm{\Delta }k=0`$, see (2.1). The zero eigenvalue in the $`k`$-direction simply reflects the scale invariance of $`Z(k)`$. The above inequality can be expressed in the following words: for positive central charge any critical point along any straight line is in fact a local minimum! For negative $`Z`$ every critical point along a straight line is a local maximum. It is well known that a critical point of the central charge is a minimum when considered as a function of all moduli. Here we have a much stronger statement: on a one-dimensional subspace (a projection of a straight line) any critical point is in fact a local minimum. We can use this observation as follows. The central charge $`Z`$ has a local minimum at the attractor point $`k_0`$ by definition. Therefore it must increase monotonically along any straight line emanating from $`k_0`$ and can never reach a second local minimum. Moreover, we see that the central charge has a global minimum at $`k_0`$ in the entire Kähler cone. Let us remark on another consequence of our observation. Consider the level sets of the central charge function, i.e. sets where $`Z<a`$ for some constant $`a`$. Note that all such sets are necessarily convex, for otherwise, if we could connect two points inside a level set by a line segment venturing outside it, there would be a maximum of the central charge on that line segment, which contradicts the above observation.
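As a sanity check of the single-cone statement, one can scan $`Z(t)`$ along a segment in the toy model used above. The charge below is chosen by hand, purely for illustration, so that the attractor sits on the segment; the scan then finds exactly one critical point, and it is a minimum.

```python
import numpy as np

# Toy intersection numbers as before (hypothetical values).
d = np.zeros((2, 2, 2))
d[0, 0, 0] = 6.0
for p in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:
    d[p] = 3.0
for p in [(0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    d[p] = 1.0

Q = np.array([2.6, 1.0])      # chosen so that Q_i ~ (kk)_i at k = (0.6, 0.6)

def Z(k):
    return (Q @ k) / np.einsum('ijl,i,j,l->', d, k, k, k) ** (1.0 / 3.0)

k0, k1 = np.array([1.0, 0.2]), np.array([0.2, 1.0])
t = np.linspace(0.0, 1.0, 2001)
z = np.array([Z(k0 + s * (k1 - k0)) for s in t])

sign = np.sign(np.diff(z))                    # slope sign along the segment
changes = np.flatnonzero(np.diff(sign) != 0)  # critical points on the grid
print("critical points:", len(changes))                              # 1
print("all are minima :", bool(np.all(np.diff(sign)[changes] > 0)))  # True
```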
3. Extended Kähler cone 3.1. Review The single Kähler cone we have just discussed is only a part of the full vector moduli space.<sup>3</sup> See for discussion. Some of the boundaries of the Kähler cone correspond to actual boundaries of the moduli space. At other boundaries the Calabi-Yau undergoes a flop transition, that is, a curve collapses to zero size, but one may continue through the wall and arrive in a different geometric phase. There one has another Calabi-Yau which is birationally equivalent to the original one. They share the same Hodge numbers but have different triple intersection numbers. In terms of the original Kähler moduli, the collapsed curve has a finite but negative area on the other side of the wall. In five dimensions this has the interpretation of a phase transition where a BPS hypermultiplet goes from positive to negative mass. The union of the Kähler cones of all Calabi-Yaus related to each other through a sequence of flop transitions is called the extended Kähler cone. Geometrically one cannot go beyond the boundaries of this extended cone. It has also been argued that at the boundaries that are at finite distance the physical vector moduli space ends. After we cross the wall into an adjacent cone we may take linear combinations of the Kähler parameters $`k^i`$ in order to get an acceptable set of moduli that yield positive areas for two- and four-cycles. But we will find it more convenient to stick to the original $`k^i`$, even though they can sometimes yield negative areas outside the original cone. By induction, we may still use the $`k^i`$ if we pass through a second flop transition into a third cone, and so on. The Calabi-Yau in the adjacent cone has different intersection numbers, which means that we have to adjust the prepotential for the new cone. It is well known how the prepotential changes when one passes through a wall: if we denote by $`m`$ the area of the collapsing curve, which is negative on the other side of the wall, then $$kkk\to kkk-(\mathrm{\#}𝐏^1)m^3.$$ Here $`\mathrm{\#}𝐏^1`$ stands for the number of $`𝐏^1`$’s shrinking to zero size at the wall. Intuitively, the growing curve should contribute a positive number to the volume of the Calabi-Yau for $`m<0`$, hence the minus sign in (3.1). This sign is crucial for proving the uniqueness of attractor points in the extended Kähler cone. Physical quantities experience only a mild change at the flop transition. In particular, from (3.1) we see that the prepotential is twice continuously differentiable. The central charge $`Z`$ is also twice continuously differentiable. The metric, which involves second derivatives of the prepotential, is only continuous.
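The degree of smoothness across the wall can be verified symbolically. In the sketch below the cubic $`V_-`$ is an arbitrary stand-in for $`kkk`$ along a line hitting the wall at $`t=t_f`$, and $`c>0`$ plays the role of $`\mathrm{\#}𝐏^1`$ (up to normalisation); the first two derivatives match across the wall while the third jumps by $`6c`$.

```python
import sympy as sp

t, tf, c = sp.symbols('t t_f c', positive=True)

V_minus = 1 + 3 * t + t ** 2 + t ** 3        # stand-in for kkk before the wall
V_plus = V_minus + c * (t - tf) ** 3         # prepotential after the flop, cf. (3.1)

for order in range(4):
    jump = sp.diff(V_plus - V_minus, t, order).subs(t, tf)
    print(order, sp.simplify(jump))          # 0, 0, 0, 6*c -> C^2 but not C^3
```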
3.2. Convexity of the extended cone The extended Kähler cone has an alternative description in terms of another cone, as we will explain below. The advantage of this description comes from the fact that this other cone is manifestly convex, and hence so is the extended Kähler cone.<sup>4</sup> We would like to thank D. Morrison for pointing this out to us. We will only give a brief sketch of the argument here and simply use the result in the remainder of the paper. For a detailed proof, one may consult the mathematics literature (see also ). There is a one-to-one correspondence between real cohomology classes of type (1,1) and line bundles. Namely, given such a class $`[\omega ]`$, one may find a line bundle $`L_{[\omega ]}`$ such that its first Chern class is $`[\omega ]`$, and conversely. In order to apply some standard constructions in algebraic geometry, we will assume that $`[\omega ]`$ is a rational class, that is, a class in the intersection of $`H^{1,1}(X,𝐂)`$ and $`H^2(X,𝐐)`$. With appropriate restrictions, some high tensor power of $`L_{[\omega ]}`$ will have sufficiently many holomorphic sections to define a ‘good’ map to some projective space, as follows: choose a basis of holomorphic sections $`s_0,s_1,\mathrm{\dots },s_n`$. Then we get a map $`f_{[\omega ]}`$ from $`X`$ to $`𝐏^n`$ by sending a point $`p`$ in $`X`$ to the equivalence class $`[s_0(p),s_1(p),\mathrm{\dots },s_n(p)]`$. Let us call the image $`Y`$. Some points $`p`$ may be a common zero of all the $`s_i`$’s. The collection of such points is called the base locus of $`L_{[\omega ]}`$. Under $`f_{[\omega ]}`$ the base locus is mapped to the origin in $`𝐂^{n+1}`$, so when one projectivises $`𝐂^{n+1}`$, cycles in the base locus may get contracted and points may get smeared out. Away from the base locus the map $`f_{[\omega ]}`$ is an isomorphism. In order to ensure that the image will be a Calabi-Yau that is related to $`X`$ by flop transitions at most, we require that $`[\omega ]`$ is movable, which means that the base locus is of complex codimension at least two in $`X`$. This condition means that the map $`f_{[\omega ]}`$ is an isomorphism in codimension one, i.e. the most that can happen is that some two-cycles contract or some points expand to two-cycles. In particular, the canonical class of $`Y`$ must be trivial, so $`Y`$ is a Calabi-Yau. Finally, by construction the holomorphic sections of (some multiple of) $`L_{[\omega ]}`$ get transformed into hyperplane sections, which are the sections of the line bundle corresponding to the Kähler class on $`Y`$. So the pull-back of the Kähler class on $`Y`$ is precisely (some multiple of) $`[\omega ]`$. It is well known that when $`[\omega ]`$ is taken to be in the original Kähler cone of $`X`$, $`f_{[\omega ]}`$ gives a smooth embedding of $`X`$ in projective space. Hopefully we have made it plausible that for any rational class $`[\omega ]`$ of type (1,1) on $`X`$, provided it is movable, we can find a Calabi-Yau $`Y`$ which has Kähler class (a multiple of) $`[\omega ]`$ and is related to $`X`$ by flop transitions. Conversely, given a Calabi-Yau $`Y`$ with rational Kähler class $`[\varpi ]`$ and related to $`X`$ by flops, it can be shown that the transform of $`[\varpi ]`$ on $`X`$ is movable. Therefore the extended rational Kähler cone is precisely the rational cone generated by movable classes of type (1,1). We want to discuss why the property of movability is preserved under positive linear combinations. To see this, take two classes $`[\omega _1]`$ and $`[\omega _2]`$ and consider the sum $`[\omega ]=m[\omega _1]+n[\omega _2]`$, where $`m`$ and $`n`$ are positive rational numbers. Since we may multiply $`[\omega ]`$ by any integer, we may assume that $`m`$ and $`n`$ are themselves integers and moreover large enough for the following argument to apply. The corresponding line bundle is $$L_{[\omega ]}=L_{[\omega _1]}^{\otimes m}\otimes L_{[\omega _2]}^{\otimes n}.$$ Take a basis of holomorphic sections $`s_i`$, $`i=0,\mathrm{\dots },n_1`$, of $`L_{[\omega _1]}^{\otimes m}`$. This defines a map to $`𝐏^{n_1}`$. Similarly choose a basis $`t_j`$, $`j=0,\mathrm{\dots },n_2`$, of $`L_{[\omega _2]}^{\otimes n}`$. Then $`L_{[\omega ]}`$ defines a map to $`𝐏^{n_1n_2+n_1+n_2}`$ by means of the sections $`s_it_j`$. Thus the base locus of $`L_{[\omega ]}`$ is the union of the base loci of $`L_{[\omega _1]}^{\otimes m}`$ and $`L_{[\omega _2]}^{\otimes n}`$, and in particular is of codimension at least two. So we conclude that the cone generated by movable classes is convex, and therefore that the extended Kähler cone is convex.
3.3. Uniqueness in the extended cone Armed with the knowledge that the extended Kähler cone is convex, we may try to employ the argument that was used successfully in section two. We start with the local minimum of the central charge at the attractor point $`k_0`$. Then on straight lines emanating from $`k_0`$ the critical points of $`Z`$ are all minima, as long as they are inside some Kähler cone. However, a critical point may lie on the boundary between two cones, in which case our argument that it must be a minimum doesn’t apply. Recall that we needed the form $`B_{ij}`$ to be positive in the direction of the line. This followed from the physical requirement of the positivity of the metric inside the moduli space, so that $`B_{ij}`$ is positive-definite for all directions tangent to the moduli space. But continuity of the metric alone does not prevent it from acquiring a zero eigenvalue on the flopping wall, and so in principle $`B_{ij}`$ may become degenerate there. We are not aware of a proof of nondegeneracy of the metric (or of a counterexample). Thus it may be possible for a critical point of the central charge along a straight line to have vanishing second derivative when it lies on the wall between two Kähler cones. Depending on the details of the behaviour of $`Z^{}`$, such a critical point may be a local minimum, a maximum or an inflection point. By the same token, it may even be that an attractor point $`k_0`$ is not a local minimum of the central charge on the extended moduli space, but rather a saddle point. We therefore switch to a more direct argument. We will first consider the case when $`k_0`$ itself is not on a wall, so that it is a local minimum. Let us again examine the central charge function along a straight line (2.1) between an attractor point $`k_0`$ and some other point $`k_1`$ inside the extended moduli space. If $`k_1`$ were another attractor point, it would also be a critical point of $`Z(t)`$ at $`t=1`$, which is why we will be looking at the critical points of $`Z(t)`$. As we know, the only sources of trouble are the points where our line crosses walls of the Kähler cone. Let us first consider the case where only a single wall is crossed between $`t=0`$ and $`t=1`$. Suppose that the intersection is at $`t=t_f`$. Rather than looking at the central charge $`Z(t)`$ itself, we will examine its derivative, computed above: $$Z^{}(t)=(k^3)^{-4/3}\left((Q\mathrm{\Delta }k)(k^3)-(Qk)(\mathrm{\Delta }kkk)\right).$$ The cubic terms in $`t`$ cancel in the bracket, so we may write $$Z^{}(t)=(k^3)^{-4/3}R(t)$$ where $`R(t)`$ is a polynomial of degree two. As $`(k^3)^{-4/3}`$ is positive, we will focus on $`R(t)`$. By assumption $`t=0`$ is a local minimum for $`Z`$, i.e. it is a root of $`R(t)`$ where $`R^{}>0`$. Recall that at the attractor point we start with a positive value of the central charge. It is important that for positive $`Z`$ the only physical root of $`R(t)`$ is the one where the first derivative is positive, or possibly zero if such a root is on the flopping wall. The other root of $`R(t)`$ is not physical, i.e. it lies outside the original Kähler cone. Thus $`Z(t)`$ has only one critical point in a cone, which is another proof of uniqueness for a single cone. Now we will see what happens in the adjacent cone. First we need to know the point where $`Z(t)\propto Qk(t)`$ vanishes; call it $`t_0`$. With this definition we may write $$Qk(t)=Qk_0+tQ\mathrm{\Delta }k=Qk_0(1-t/t_0)$$ When we cross the flopping wall at $`t=t_f`$ the prepotential changes according to (3.1), $$k^3\to k^3+c(t-t_f)^3$$ where $`c>0`$. So we must also modify $`Z^{}(t)`$ for $`t>t_f`$: $$\begin{array}{cc}\hfill Z^{}(t)|_{t>t_f}& =(k^3+c(t-t_f)^3)^{-4/3}\left(R(t)+c(t-t_f)^2\left[(Q\mathrm{\Delta }k)(t-t_f)-Qk(t)\right]\right)\hfill \\ & =(k^3+c(t-t_f)^3)^{-4/3}\left(R(t)+cQk_0\left(\frac{t_f}{t_0}-1\right)(t-t_f)^2\right)\hfill \\ & =(k^3+c(t-t_f)^3)^{-4/3}P(t).\hfill \end{array}$$ We are interested in the physical roots of $`P(t)`$ for $`t\ge t_f`$. As long as $`Z(t)`$ is positive those are the roots where $`P^{}\ge 0`$. $`Z(t)`$ is positive for all $`t>0`$ if $`t_0<0`$, while if $`t_0>0`$ it is only positive for $`t<t_0`$. In both cases the constant $`A\equiv cQk_0\left(\frac{t_f}{t_0}-1\right)`$ is negative. First we show that for $`A<0`$ there are no physical roots of $`P(t)`$ for $`t\ge t_f`$. For that we simply find the root $`t_r`$ where $`P^{}\ge 0`$ and show that $`t_r<t_f`$. This will also imply that $`Z(t)`$ cannot start decreasing, and therefore cannot become negative inside the extended moduli space. We write $`R(t)=at^2+bt`$, where $`b>0`$ since $`t=0`$ is a minimum of $`Z`$.
Then we need to find the roots of the quadratic equation $$P(t)=at^2+bt+A(t-t_f)^2=0$$ The root where the derivative $`P^{}(t)`$ is nonnegative, if it exists, is always given by $$t_r=\frac{2At_f-b+\sqrt{b^2+4t_f(-A)(b+at_f)}}{2(a+A)}$$ There are then four cases to consider, depending on the value of the coefficient $`a`$. 1. $`a\le -b/t_f<0`$; note that $`a+A<0`$ and $`b+at_f\le 0`$, see Fig. 1. In this case there may be no roots, but if there are, we have $$t_r<\frac{2At_f-b}{2(a+A)}\le \frac{2At_f+at_f}{2(a+A)}<t_f.$$ This case is not physical, because the second root of $`R(t)`$ is inside the original Kähler cone, which gives an unphysical maximum of the central charge. We include this case for future reference. Fig. 1 2. $`-b/t_f<a<-A`$; still $`a+A<0`$, but $`b+at_f>0`$. Now there are always roots and we have $$\begin{array}{cc}\hfill t_r& <\frac{2At_f-b+\sqrt{b^2+4t_fa(b+at_f)}}{2(a+A)}\hfill \\ & =\frac{2At_f-b+|b+2at_f|}{2(a+A)}\hfill \\ & \le \frac{2At_f-b+b+2at_f}{2(a+A)}\hfill \\ & =t_f.\hfill \end{array}$$ 3. $`a=-A>0`$. In this case $`P(t)`$ is linear. It has only one root, which satisfies $$t_r=\frac{at_f^2}{b+2at_f}<t_f.$$ 4. $`-A<a`$, see Fig. 2. As in item $`2`$ we replace $`-A`$ by $`a`$ inside the square root. The resulting equations are exactly the same as in (3.1). Fig. 2.
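The case analysis can also be brute-force checked numerically. The sketch below is our check, with arbitrary parameter ranges; it samples coefficients in the regime of the text, $`b>0`$ and $`A<0`$, and confirms that no root of $`P`$ with $`P^{}\ge 0`$ ever lies at $`t\ge t_f`$.

```python
import numpy as np

rng = np.random.default_rng(4)

def physical_roots(a, b, A, tf):
    """Roots of P(t) = a t^2 + b t + A (t - tf)^2 with P'(root) >= 0."""
    c2, c1, c0 = a + A, b - 2 * A * tf, A * tf ** 2
    if abs(c2) < 1e-12:                      # item 3: P(t) is linear
        roots = [] if abs(c1) < 1e-12 else [-c0 / c1]
    else:
        disc = c1 ** 2 - 4 * c2 * c0
        if disc < 0:
            return []
        roots = [(-c1 + s * np.sqrt(disc)) / (2 * c2) for s in (-1.0, 1.0)]
    return [r for r in roots if 2 * c2 * r + c1 >= -1e-9]

violations = 0
for _ in range(50_000):
    a = rng.uniform(-5, 5)
    b = rng.uniform(0.1, 5)                  # b > 0: t = 0 is a minimum
    A = rng.uniform(-5, -0.01)               # A < 0, as derived in the text
    tf = rng.uniform(0.1, 5)
    violations += any(r >= tf for r in physical_roots(a, b, A, tf))
print("physical roots at t >= t_f:", violations)   # expect 0
```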
What we have found is that $`P(t)`$ doesn’t have physical roots in the new Kähler cone. This means that $`Z(t)`$ doesn’t have critical points there and therefore continues to grow as we go away from the attractor. It follows that we need not consider the case when the central charge is negative. Crossing several walls can now be handled by induction. The polynomial $`P(t)`$ after the wall plays the role of $`R(t)`$ for the next crossing. We have shown that $`P(t)`$ has a root where $`P^{}>0`$, but it lies to the left of the new wall, and therefore we are in the same situation as when we started. Let us comment on the (im)possibility of a critical point of $`Z(t)`$ which lies on the flopping wall and is a local maximum. It would correspond to the situation where $`R`$ and $`R^{}`$ both vanish (recall that $`R^{}`$ cannot be negative at the critical point, by continuity and positivity of the metric away from the flopping wall). Such a point can be described by the situation in item 4 above with $`b=0`$, $`t_f=0`$ and both roots of $`R(t)`$ at $`0`$. As $`R(t)`$ is positive to the left of the wall and $`P(t)`$ is negative to the right of it, this critical point is a local maximum. However, $`R(t)`$ cannot have a double zero at $`t_f`$, because we have shown that it always has one root strictly to the left of the wall. Therefore such critical points do not arise. Finally, we are left to consider the case when $`k_0`$ itself lies on the flopping wall. Then $`P(t)`$ may be either positive or negative to the right of the wall. In the former case $`Z(t)`$ starts growing for $`t>0`$ and the analysis we have made earlier carries over with no changes. In the latter case $`k_0`$ may be a local maximum in some directions, precisely as described in the previous paragraph. The difference is that we now begin in this situation and cannot argue that $`R(t)`$ does not have a double root on the wall. In such a case the central charge decreases from $`t=0`$ and it may eventually become negative. While it is still positive, the first wall crossing is described essentially by the situation in item 1 above, with $`b=0`$. It is then clear that $`P(t)`$ will have no roots after all subsequent wall crossings while $`Z(t)>0`$. Moreover, in every cone $`P(t)`$ will be a downward-pointing parabola with its apex to the left of the left wall. After the central charge becomes negative the discussion changes in two ways. First, when crossing a wall the constant $`A`$ in the change from $`R(t)`$ to $`P(t)`$ becomes positive: $$R(t)\to P(t)=R(t)+A(t-t_f)^2,A>0.$$ And second, the physical critical points are now the roots of $`R(t)`$ where the first derivative is negative, or possibly zero if the root is on the flopping wall. Now, $`Z(t)`$ is decreasing when it becomes negative; therefore the first critical point after that may only be on the wall of some cone such that $`R(t)`$ has a double zero there. If we assume that this critical point exists, right before it $`R(t)`$ would be a downward-pointing parabola with a double zero on the wall. We can move backwards from it and reconstruct the polynomials $`R(t)`$ and $`P(t)`$ in all the preceding cones. Taking into account that the constant $`A`$ is now positive but has to be subtracted, we see that while $`Z`$ is negative, $`R(t)`$ in every cone is a downward-pointing parabola with no roots and apex to the right of the right wall. In the cone where $`Z`$ crosses zero we obtain a contradiction with the previous analysis, where we found that the apex of the parabola should be to the left of the left wall. Therefore, there cannot be multiple attractor points, even when they lie on the walls between Kähler cones. 4. Discussion In this paper we have demonstrated that a critical point of the central charge $`Z`$ is unique if it exists. Moreover, if $`Z`$ has a minimum (maximum) at the critical point then it will grow (decrease) along straight lines emanating from the critical point. In this section we will discuss two implications of our result. If one restricts the moduli to lie inside a single Kähler cone then uniqueness is not surprising. The reason for this derives from the microscopic interpretation of entropy: it should be possible to reproduce the entropy of a BPS black hole by a microscopic count of degenerate BPS states. For the Calabi-Yau black holes considered in this paper, we would have to count the degeneracy<sup>5</sup> See for a discussion of the correct quantity to consider. of holomorphic curves within the class specified by the charge vector. As we have seen in section two, the macroscopic entropy predicted by the attractor mechanism is $`S=\frac{\pi ^2}{12}Z_0^{3/2}`$. Thus one expects that the degeneracy of BPS states for large charges, when supergravity should give a good description, asymptotically approaches $`e^S`$. The count was carried out for the special case of elliptic threefolds. The attractor equation in that case could be solved explicitly, and the attractor point was therefore unique (at least in a single Kähler cone). But even for a general Calabi-Yau one should not have expected multiple attractor points to occur in a single Kähler cone. Supergravity is well-behaved when the moduli vary only over a single cone, and so the existence of two black hole solutions with different entropy should have its origin in the possibility of counting different BPS state degeneracies. But the number of holomorphic curves does not change inside a Kähler cone, so neither should the number of BPS states. So at least inside a single Kähler cone, it is clear that the entropy should be completely fixed by the specification of the charges of the black hole.
In the extended moduli space, however, this is not so clear: the number of curves does change as one crosses a flopping wall. One could therefore interpret the walls of a Kähler cone as hypersurfaces of marginal stability, analogous to the curve of marginal stability in Seiberg-Witten theory. The puzzle is this: suppose one starts with asymptotic moduli at some point very far away from the attractor point. Then somehow the attractor point seems to be aware of the degeneracy of BPS states for the Calabi-Yau associated with the asymptotic moduli. But when we choose the asymptotic moduli in a cone that is different from the cone where the attractor point lies, the degeneracies in the two cones will in general not be the same, so there is no a priori reason for the absence of multiple critical points. In the light of our result, one possibility is that the number of curves changes only very mildly across a transition, mildly enough that the asymptotic degeneracy in the limit of large charges is not affected. It would be interesting to check this mathematically. The existence of multiple attractor points was also thought to be desirable for the construction of domain walls in five dimensional $`N=2`$ gauged supergravity, along the lines of . In that setup the goal is not to minimise the central charge, but to find extrema of the scalar potential of gauged supergravity, which is $$V=6(W^2-\frac{3}{4}g^{ij}\partial _iW\partial _jW).$$ In the above, $`W=Q_ik^i`$, where the $`k^i`$ are the usual Kähler moduli and the $`Q_i`$ are the gravitino and gaugino charges under the $`U(1)`$ that is being gauged. Even though the interpretation is different, $`W`$ is numerically the same as what we have called $`Z`$ before, and the supersymmetric critical points of $`V`$ are also critical points of $`W`$. At a critical point $`W_0`$ of $`W`$ the five dimensional supergravity solution is anti-de Sitter space with cosmological constant equal to $`-6W_0^2`$. To construct a domain wall, one would like to have two critical points $`k_0`$ and $`k_1`$ of $`W`$. Then one could write down a supergravity solution that interpolates between two different anti-de Sitter vacua, with the asymptotic values of the moduli being $`k_0`$ on one side of the wall and $`k_1`$ on the other. It was hoped that this might lead to a supergravity realisation of the Randall-Sundrum scenario . Unfortunately, as we have seen, this construction does not appear to be possible, at least in its simplest form, because of the absence of multiple (supersymmetric) critical points. Finally, the attractor mechanism in four dimensions is somewhat similar to the five dimensional mechanism considered in this paper. It would be interesting if our methods could be used to shed some light on this important problem as well. Acknowledgements: We would like to thank M. Bershadsky, R. Gopakumar, S. Nam, A. Klemm, T. Pantev, and especially D. Morrison and C. Vafa for valuable discussions and support. The work of SZ was supported in part by the DOE grant DE-FG02-91ER40654. References [1] A.C. Cadavid, A. Ceresole, R. D’Auria and S. Ferrara, “Eleven-dimensional supergravity compactified on Calabi-Yau threefolds,” Phys. Lett. B357, 76 (1995), hep-th/9506144. [2] K. Becker, M. Becker and A. Strominger, “Five-branes, membranes and nonperturbative string theory,” Nucl. Phys. B456, 130 (1995), hep-th/9507158. [3] M. Gunaydin, G. Sierra and P.K. Townsend, “The Geometry of N=2 Maxwell-Einstein Supergravity and Jordan Algebras,” Nucl. Phys. B242, 244 (1984). [4] S. Ferrara and R.
Kallosh, “Supersymmetry and Attractors,” Phys. Rev. D54, 1514 (1996) hep-th/9602136. relax A. Chou, R. Kallosh, J. Rahmfeld, S. Rey, M. Shmakova and W.K. Wong, “Critical points and phase transitions in 5d compactifications of M-theory,” Nucl. Phys. B508, 147 (1997) hep-th/9704142. relax W.A. Sabra, “General BPS black holes in five dimensions,” Mod. Phys. Lett. A13, 239 (1998) hep-th/9708103. relax C. Vafa, “Black holes and Calabi-Yau threefolds,” Adv. Theor. Math. Phys. 2, 207 (1998) hep-th/9711067. relax G. Moore, “Arithmetic and attractors,” hep-th/9807087. relax P. Griffiths and J. Harris, “Principles of Algebraic Geometry,” Wiley-Interscience (1978). relax E. Witten, “Phase Transitions In M-Theory And F-Theory,” Nucl. Phys. B471, 195 (1996) hep-th/9603150. relax D. Morrison, “Beyond the Kähler cone,” Proc. of the Hirzebruch 65 conference on algebraic geometry (M. Teicher, ed.), Israel Math. Conf. Proc., vol. 9, 1996, pp. 361–376, alg-geom/9407007. relax Y. Kawamata, “Crepant blowing-up of 3-dimensional canonical singularities and its application to degenerations of surfaces,” Annals of Mathematics, 127 (1988), 93–163. relax K. Behrndt and M. Cvetic, “Supersymmetric domain wall world from D = 5 simple gauged supergravity,” hep-th/9909058. relax L. Randall and R. Sundrum, “An alternative to compactification,” hep-th/9906064.
no-problem/9912/cond-mat9912102.html
ar5iv
text
# Breakdown of the Kondo Effect in Critical Antiferromagnets A number of heavy-fermion systems present very peculiar non–Fermi-liquid properties which have been suggested to be a consequence of the proximity of the antiferromagnetic quantum critical point (QCP). The weak-coupling approach to quantum phase transitions, which describes many ferromagnetic compounds fairly well, fails in describing the temperature exponents of various thermodynamic and transport properties of these materials. Recent neutron scattering experiments in CeCu<sub>6-x</sub>Au<sub>x</sub> have suggested that the failure of this approach can be related to the unusual energy dependence of the spin susceptibility. The measured $`\omega /T`$ scaling of the spin susceptibility: $$\chi (\omega ,T)=\frac{1}{T^\alpha }f(\frac{\omega }{T})$$ (1) was fitted with: $$\chi ^1=c^1[f(𝐪)+(i\omega +aT)^\alpha ]$$ (2) where $`\alpha 0.8`$, while in the weak coupling approach $`\alpha =1`$. This result suggests that the anomalous non–Fermi-liquid behavior in these compounds may have a local origin. One possible interpretation is that the screening that leads to the Kondo effect is incomplete as a consequence of the proximity to the QCP . A first analysis of the Kondo effect in systems close to a QCP was performed more than twenty years ago by Larkin and Mel’nikov. They showed that the Kondo screening is not effective in itinerant electron ferromagnets close to the QCP. At the present there is no theory of the Kondo effect at the antiferromagnetic QCP. Some authors have suggested that disorder is an essential ingredient for the breakdown of the Kondo effect, and that it is therefore the origin of the anomalous properties of heavy-fermion materials. In the Kondo-disorder model suggested by Miranda, Dobrosavljević, and Kotliar , a randomness in the distribution of the hybridization matrices leads to a broad distribution of the Kondo temperatures with a tail that goes to zero. Some spins then remain unscreened at all temperatures and they can be the origin of anomalies in the thermodynamics and transport properties. A different approach is based on the metallic spin glass model introduced by Sengupta and Georges and Sachdev, Read and Oppermann . In these studies it was shown that, at the critical point, the spin fluctuation spectrum is non–Fermi-liquid-like, and the impurity spins develop a power law correlation function in time. Nevertheless the rôle of disorder in heavy-fermion materials and, in general, of the non-Fermi liquid properties of metals is still a hotly debated question. It is therefore important to understand whether these results are specific to the QCP under consideration, or if they survive also in clean systems. Another possible origin of the anomalous local susceptibility is the formation of a pseudo-gap in the single particle spectrum . In quantum critical systems that show superconductivity the pseudo-gap can be generated by superconducting fluctuations but, in general, the origin of the pseudo-gap in critical itinerant antiferromagnets is not clear at the moment and has been suggested mainly on the basis of studies employing Dynamical Mean Field Theory . 
One way to represent a system when the Kondo effect breaks down is to consider a spin living in a cavity in the presence of an effective Gaussian Weiss field $`h`$ with correlation function : $$Th_a(\tau )h_b(0)\frac{\delta _{a,b}}{\tau ^{2ϵ}}.$$ (3) Integrating over the Gaussian field $`h`$ we get the following contribution to the effective action of the local spin: $$I=g𝑑\tau \frac{S^a(\tau )S^a(\tau ^{})}{|\tau \tau ^{}|^{(2ϵ)}},$$ (4) where $`g`$ is the coupling constant. In itinerant antiferromagnets like CeCu<sub>6-x</sub>Au<sub>x</sub> this action has to be combined with the usual Kondo screening. Such a model was analyzed by Sengupta , who used an $`ϵ`$-expansion and found that the model with Kondo screening possesses an unstable non-Fermi-liquid fixed point. In the leading order in $`ϵ`$ the exponent of the spin-spin correlation function at the fixed point coincides with the exponent for the model without Kondo screening. For the latter case the results of Sengupta coincide with a naïve scaling analysis which gives $`[S]=\tau ^{ϵ/2}`$ and $`S(\tau )S(0)\tau ^ϵ`$ for the spin-spin correlation function. In frequency space we have $`\chi _0^1(\omega )(i\omega )^{1ϵ}`$. This approach naturally leads to $`\omega /T`$ scaling of the spin susceptibility of the form (1) with $`\alpha =1ϵ.`$ (5) In this paper we discuss the behavior of a single impurity in an antiferromagnetic host close to the QCP. Our aim is to obtain Sengupta’s phenomenological theory from a microscopic approach to a critical antiferromagnet. This would show that the Kondo effect in the system is not complete. We first analyze a model of itinerant electron antiferromagnets (IEAF), which is a possible model for heavy-fermion systems, and then briefly consider also a model for an insulator (IAF). The results are very similar to those obtained for metallic spin glasses and suggest that proximity to the QCP is enough to suppress Kondo screening. We use a path integral approach and represent the bulk as a bosonic field $`𝐧(𝐱,\tau )`$ interacting with the impurity via the following action: $$S=S^{bulk}+J𝑑\tau 𝐧(0,\tau )𝐒(\tau ),$$ (6) where $`S^{bulk}={\displaystyle 𝑑\omega d^Dk[\frac{1}{2}𝐧(𝐤,\omega )D_0^1(𝐤,\omega )𝐧(𝐤,\omega )]}+`$ (7) $`g{\displaystyle 𝑑\tau d^Dx(𝐧(𝐱,\tau )^2)^2}.`$ (8) The form of the bare propagator, $`D_0(𝐤,\omega )`$, depends on the host material. In the case of the $`D`$-dimensional IEAF the spin-spin correlation function of the host can be obtained in the random phase approximation and is : $$D_0(𝐤,\omega )^1=|\omega |+k^2+\delta ,$$ (9) where $`\delta `$ vanishes at the critical point. In relation to the bulk properties the $`𝐧`$-field can be treated as Gaussian if the effective dimensionality of the bulk theory exceeds the upper critical dimension. This condition is satisfied for the IEAF QCP, for $`D>2`$ (which is the case that we shall discuss in this paper), while for the IAF the quartic term renormalizes the bulk propagator giving rise to an anomalous power. 
Treating the bulk field as Gaussian and integrating it out we obtain an effective theory of the form (4), with: $$ϵ=2D/2,\alpha =D/21.$$ (10) In order to check whether the interaction between the modes of the $`𝐧`$-field, irrelevant for the bulk properties, remains irrelevant for the impurity, we have calculated the quartic term in the impurity effective action: $`S_{\mathrm{eff}}[𝐒]={\displaystyle \frac{1}{2J}}{\displaystyle 𝑑\tau 𝐒(\tau )D(0,\tau \tau ^{})𝐒(\tau ^{})}+`$ (11) $`g{\displaystyle 𝑑\tau _1𝑑\tau _2𝑑\tau _3𝑑\tau _4𝐒(\tau _1)𝐒(\tau _2)𝐒(\tau _3)𝐒(\tau _4)\mathrm{\Gamma }(\tau _1,\tau _2,\tau _3,\tau _4)}`$ (12) where $`\mathrm{\Gamma }(\tau _1,\tau _2,\tau _3,\tau _4)`$ $`=`$ $`{\displaystyle 𝑑\tau 𝑑\tau ^{}d^Dxd^Dx^{}D(𝐱,\tau _1\tau )}`$ (13) $`D(𝐱,\tau _2\tau )g(𝐱𝐱^{},\tau \tau ^{})D(𝐱^{},\tau \tau _3)D(𝐱^{},\tau \tau _4);`$ (14) $`g(𝐱,\tau )`$ is the renormalized bulk vertex and $`D(𝐱,\tau )`$ the renormalized bulk propagator. As mentioned above, for this model the propagator is not renormalized and the same is true for the interaction, so $`\mathrm{\Gamma }`$ scales like $`\mathrm{\Gamma }\tau ^{13/2D}`$ and the spin operator like $`S\tau ^{D/41}`$. Then the four-spin interaction scales like $`\tau ^{1D/2}`$, and is irrelevant for $`D>2`$, which is the case of interest. For CeCu<sub>6-x</sub>Au<sub>x</sub> the spin fluctuation spectrum is anisotropic in space and the coefficient of the quadratic term is very small in one direction . In the framework of this model such anisotropy can only be obtained introducing it at a bare level. We therefore consider also: $$D_0^1(𝐤,\omega )=|\omega |+Ak_{}^2+Bk_{}^4+\delta $$ (15) In this case we obtain the same results with an effective dimension given by: $$D=D_{\mathrm{eff}}=5/2.$$ (16) This is related to the fact that, in the action, the soft direction provides less phase space and counts as ‘half a dimension’. We then find from (10) that $`\alpha =0.5`$ in the isotropic (3D) case, and $`\alpha =0.25`$ in the anisotropic one. This simple analysis shows that in critical itinerant electron antiferromagnets, described by the model (7) with the bare propagator (9), the screening effect is incomplete and the impurity develops a power law correlation function. The effect however is too strong to describe the observed behavior of CeCu<sub>6-x</sub>Au<sub>x</sub>. It is interesting to compare these results with the behavior of an insulator. The QCP in such a system is described by the (D + 1)-dimensional O(3) non-linear sigma model with a special value of the coupling constant. At the QCP the propagator has the form: $$D(𝐤,\omega )^1=(\omega ^2+k^2)^{1\eta /2}$$ (17) Thus in D dimensions we have: $$\alpha =D2+\eta .$$ (18) In $`D=3`$, $`\eta =0`$ which gives $`\alpha =1`$ (modulo logarithmic corrections). For $`D=2`$ the index $`\eta `$ can be obtained by taking into account that the (2+1)-dimensional QCP is in the same universality class as the ferromagnetic phase transition point in a three-dimensional Heisenberg ferromagnet. For the latter system the exponent $`\eta `$ is known to be 0.031 . Thus in D=2 we have: $$\alpha =\eta .$$ (19) It is reasonable to expect that for $`D=2.5`$ the index $`\eta `$ is also small, so we get the estimate $`\alpha 0.5`$. In summary, we have studied the the behavior of one impurity in two magnetic systems close to the antiferromagnetic quantum critical point. As a consequence of the proximity of the QCP, the Kondo effect doesn’t occur, and the impurity develops a power law correlation function in time. 
The emerging picture is very similar to the one proposed to describe the non–Fermi-liquid behavior in heavy-fermion materials and to that found in some works on metallic spin glasses. It suggests that a system, approaching the QCP, undergoes a phase transition from a Kondo phase, with Fermi liquid properties, to a non–Fermi-liquid phase, dominated by field fluctuations. This suggests that incomplete Kondo screening and power law dependence of the spin-spin correlation function are not necessarily related to disorder, but can be simply consequences of proximity to the AFM critical point. It is a pleasure to thank Alexei M. Tsvelik for many interesting discussions on the subject and for comments on the draft, and C.Hooley for reading the manuscript.
no-problem/9912/astro-ph9912091.html
ar5iv
text
# Non-Voigt Lyman-alpha absorption line profiles ## 1 Introduction Recent hydrodynamic simulations of large-scale structure formation in cold dark matter (CDM) dominated cosmologies (e.g. Cen et al. 1994, Zhang, Anninos & Norman 1995, Hernquist et al. 1996, Theuns et al. 1998) have been able to naturally reproduce the Lyman-alpha (Ly$`\alpha `$) forest. These simulations have been remarkably accurate in reproducing the column density, and Doppler width distribution, as well as evolutionary properties seen in the Ly$`\alpha `$ forest of real QSO spectra (e.g. Kirkman & Tytler 1997, Lu et al. 1996). Early attempts to model the Ly$`\alpha `$ absorption lines considered pressure-confined clouds (Sargent et al. 1980, Ostriker & Ikeuchi 1983) inside a hot intergalactic medium (IGM). Other ideas used dark matter mini-halos to gravitationally confine the Ly$`\alpha `$ clouds (Rees 1986). In these models a Voigt profile naturally arises from a Maxwellian thermal distribution, or Gaussian turbulent motion of the gas. This lead to the fitting of Voigt profiles to the data; a method now generally used to analyse the complex blended features seen in the high resolution Keck spectra (e.g. Kirkman & Tytler 1997). The numerical simulations have brought about a paradigm shift in our understanding of the IGM. No longer is the Ly$`\alpha `$ forest viewed as many isolated objects within a hot tenuous IGM, but rather it is seen as the density fluctuations of the IGM itself caused by gravitational collapse into a network of sheets and filamentary structures (sometimes described as a fluctuating Gunn-Peterson effect, after Gunn & Peterson 1965). The observed spectrum is thus described by the density, temperature and velocity of the gas at all points along the line-of-sight, and the absorption lines that make up the forest are caused by extended over-densities, rather than discrete clouds. This change in our understanding of the IGM has called into question the justification behind Voigt profile fitting. In recent years, dramatic improvements in the resolution and signal-to-noise ratio (S/N) of data has lead to complex blends of Voigt profiles in order to fit the absorption features. Problems due to the somewhat arbitrary number of components, and the non-uniqueness of the fitted solutions have become apparent. Now the simulations have shown that the physical interpretation of the solutions themselves are in doubt. Despite this there is little direct evidence that the Ly$`\alpha `$ forest line profiles are significantly different from Voigt profiles. Many absorption lines appear to have remarkably Gaussian velocity distributions, especially at lower redshift where blending is less of a problem. On the other hand, non-Gaussian features could as well be attributed to blends of two or more Voigt profiles as to an intrinsically non-Voigt distribution. Outram et al. (1999) proposed a method to detect departures from Voigt profiles of the absorption lines in a statistical way. They applied it to the Ly$`\alpha `$ forest spectrum of GB1759+7539, but detected no significant evidence of non-Voigt profiles. In this letter we develop the method proposed by Outram et al.. In the next section we discuss the signature of non-Voigt profiles in Ly$`\alpha `$ forest spectra, and present details of the method to detect it. Then we apply this method to simulated forest spectra to show that the profiles seen in hydrodynamic simulations are indeed statistically non-Voigt, before finally discussing the implications of this result. 
## 2 Detecting Non-Voigt Profiles If the Ly$`\alpha `$ forest is now viewed as density fluctuations within the IGM, then what does this imply about the physical nature of Ly$`\alpha `$ absorbers? Typical systems have column densities of around $`13.0<`$log $`N`$(H I)$`<14.0`$ (log cm<sup>-2</sup>). Observations of quasar pairs have shown that these systems can be very large; of the order of 500 kpc across (Dinshaw et al. 1994). They are therefore highly ionized, and hence contain a significant fraction of all the baryons at $`z=3`$, yet they are only slightly overdense ($`\rho /\overline{\rho }110`$) compared to the mean baryonic density. Although a large variety of structures are seen in the simulations, they tend to have a flattened geometry, in the form of “pancakes” or filamentary structures. They are unlikely to be virialized objects, and are probably transient density fluctuations undergoing gravitational collapse in one direction, whilst still expanding with the Hubble flow in others (Haehnelt 1996). The individual absorption profiles depend on the orientation and exact geometry of that system, and may well be asymmetric. In general though, if Ly$`\alpha `$ absorbers are in a state of collapse then the bulk motion, and compressional heating of the gas should lead to broad non-Maxwellian wings (Rauch 1996, Nath 1997). Equally if the absorber is extended in space, and undergoing Hubble expansion along the line-of-sight then the line profile would also deviate from that predicted by the simple model. The expected shape of a typical absorption line from such objects is shown in figure 1. The central core is that of a log $`N`$(H I)$`=13.0`$, $`b=20`$ km s<sup>-1</sup> absorption line, with broad non-Maxwellian wings due to infalling gas. When a spectrum is fitted with Voigt profiles, these wings may be fitted by a single broad, low $`N`$(H I) component, or perhaps by two low $`N`$(H I) components, either side of the main line, or simply not fitted at all. Rauch (1996) considered the first of these possibilities, searching for broad lines fitted simultaneously in redshift space with narrower ones. He looked at both real and simulated data for an anti-correlation of Doppler parameters for profile pairs at small separations, detected a positive signal in both data-sets and concluded that there was evidence of a departure from Voigt profiles. In order to investigate the line profiles further, we estimate the departures from the Voigt profile in absorption lines using the following method. Firstly, for the spectrum in question, Voigt profiles are fitted to all absorption features. Since we are looking for departures from Voigt profiles in stronger lines, the raw spectrum is divided by an artificial spectrum made by inserting the Voigt profile fits on the continuum. The obvious thing to do is to divide through by the fitted Voigt profiles, only leaving any non-Voigt residuals in the spectrum. However, this is complicated by the fact that many of these residuals will have been fitted themselves, using a blend of low column density lines with high or low Doppler width, and therefore the non-Voigt signal could be removed as well. In an attempt to overcome this, all the systems with log $`N`$(H I)$`<12.5`$ are left in for the examples here, though the precise limit can be any desired. 
Although this leaves in many randomly-distributed small absorption features, together with the non-Voigt residuals of removed systems, the final step is then to co-add many of the absorption systems whose Voigt core had been divided out. Any signature of non-Voigt profile should be reinforced with this stacking, whereas randomly distributed small absorption lines should be averaged away if enough lines are stacked. The expected residuals using this method depend on how many components are used to fit the non-Voigt wings. The general shapes can be seen in figure 2. If the solid profile shown in figure 1 is fitted by a single Voigt profile, which is then divided out, then the residual expected is the solid curve in figure 2. As expected there are dips due to the two wings, and inside these are two peaks due to the fact that the Doppler width of the fitted Voigt profile is forced to be wider than that of the core. If one or two extra lines are fitted to the wings then the residuals would look like the dotted and dashed curves in figure 2 respectively. When stacking hundreds of lines, a combination of all three effects would be expected, and the ratio of each would depend on the S/N of the data, and the fitting criteria used. The randomly-distributed small lines would also have an effect; depressing the entire spectrum by about 0.5%. Within about 20 km s<sup>-1</sup> of the absorption line centre (1215.60 - 1215.74Å) the division to remove the profile core becomes uncertain, especially for the higher column density lines, where the residual signal is near zero. However, since we are looking for a signal in the wings, as opposed to the core, this region was ignored for the analysis. ## 3 Simulated Ly$`\alpha `$ Forest Profiles In order to test this idea further we used artificial spectra taken from Theuns et al. (1998). The spectra were created using a simulation code based on a combination of a heirarchical particle-particle-particle-mesh (P3M) scheme for gravity and smoothed particle hydrodynamics (SPH) for gas dynamics. The simulations assume a standard adiabatic, scale-invariant CDM cosmology ($`\mathrm{\Omega }=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`H_0=50`$ km s$`^1`$Mpc<sup>-1</sup>, $`\sigma _8=0.7`$, $`\mathrm{\Omega }_B=0.05`$). A box size of 5.5 Mpc was used, with $`64^3`$ SPH, and an equal number of dark matter particles. Finally, the assumed background radiation spectrum was half the amplitude of the spectrum computed by Haardt & Madau (1996). For further details, refer to Theuns et al. (1998) where this simulation is named A-5-64. Theuns et al. tested the convergence of this simulation by comparing it to a similar run with even higher numerical resolution (A-2.5-64). They concluded that the A-5-64 run is very close to convergence and that conclusions drawn from this simulation are reliable. We took the absorption spectra computed from 128 different lines-of-sight through the box at redshift $`z=3`$. Each spectrum was convolved with a Gaussian profile with full width at half maximum, FWHM = 7 km s<sup>-1</sup>, and Gaussian noise was added with standard deviation $`\sigma =0.02`$ (S/N=100 for pixels at the continuum) to mimic spectra observed using the HIRES spectrograph on the Keck Telescope. The spectra were wrapped around to enable fitting of features that were otherwise close to the edge of the region. 
Voigt profiles were fitted, using a $`\chi ^2`$ minimization technique, to the absorption features in order to determine the redshifts, column densities and Doppler widths of the Ly$`\alpha `$ absorption lines, using an automated version of the software package VPFIT (Webb 1987; Rauch et al. 1992). When comparing simulated data to observed spectra, one of the major problems to be overcome is that of continuum fitting. The simulated spectra show typically around 1-2% zero-order absorption, which would be removed by the continuum fitting procedure for real data. This is usually done by using low-order polynomial fits to apparently unabsorbed parts of the spectrum. The regions over which the continuum is fitted are typically many times the size of a simulated spectrum. In an attempt to overcome this problem, and treat the simulated data in a similar manner to real data, VPFIT has been developed to simultaneously fit a linear multiplicative factor to the initial assumed continuum (unity for the simulations) during the $`\chi ^2`$ minimization procedure. The continuum was lowered by an average of 1.6% during the fitting of the artificial spectra. The spectra were then divided through by the profiles of the stronger fitted lines (log $`N`$(H I)$`>12.5`$), leaving the residuals due to weaker lines in. Care was taken to give zero weight to those regions where the residual flux was below 20%, and correct the errors for the other regions accordingly. The resulting residual spectra were co-added, weighted according to variance, centred on the rest wavelength of the removed Voigt profile absorption lines with parameters $`13.0<`$ log $`N`$(H I)$`<14.0`$, and $`15.0<b<60.0`$ km s<sup>-1</sup>. The result, with 235 lines stacked, can be seen in figure 3. If a line profile was indeed narrow in the centre, but with broader-than-Maxwellian wings, the residual profile after fitting with a Voigt profile, then dividing through by this fit would be as shown by the smooth surve in figure 3 (modelled by a $`b=20`$ km s<sup>-1</sup>, log $`N`$(H I)$`=13.0`$ line with broader wings simulated by a coincident $`b=60`$ km s<sup>-1</sup>, log $`N`$(H I)$`=12.0`$ line). This pattern seems to follow that seen in the residual spectrum remarkably well. The dips at 1215.50 & 1215.85Å, and the peaks at 1215.55, and 1215.85Å are similar to those predicted by the model curve. The entire spectrum has been depressed by about 0.5% due to the small (log $`N`$(H I)$`<12.5`$) features not removed, and these could also explain the small irregularities away from the line centre. These irregularities would be expected to diminish as more lines are stacked. Were the profiles of the absorption features intrinsically random blends of Voigt profiles, then no such residual features would be expected in the co-added spectrum. This is therefore clear evidence of intrinsic departures from the Voigt profile in the simulated spectra. ## 4 Discussion The next step is to apply this method to real data in order to determine whether the non-Voigt absorption profiles are a simulated phenomenon, or a real physical property of the absorbers. The stacked residual spectrum in figure 3 was created using 235 separate lines at S/N=100. This is of the same order as the number of observed lines in a single high quality Keck spectrum, and so a result should be easily achievable. 
Care would need to be taken to remove regions where heavy-element absorption is detected, and the continuum should be treated in a similar manner in order to produce a fair comparison. The latter would mean fitting the spectrum in very small chunks ($`500`$ km s<sup>-1</sup>), simultaneously introducing a local fit to the continuum in a similar manner to that described above. The S/N of the spectrum is also important as it could change the way that the residuals are fitted and hence the resulting residual profiles, as discussed above. It has been noted that the absorption line profiles of individual systems in simulated spectra often appear to have broad wings or asymmetries, signifying a non-Voigt profile (e.g. Davé et al. 1997). Rauch (1996) showed that pairs of lines with small separations have anti-correlated Doppler-widths, suggesting that the Voigt profile decompositions are not actual blends. We have introduced a method to test whether or not the line profiles in the Ly$`\alpha `$ forest are intrinsically non-Voigt. A similar technique was applied to GB1759+7539 (Outram et al. 1999) with no sign of any signal. However, a much more extended sample is needed, with care paid to continuum uncertainties before the results can be compared. If wings are found in the real data, it will provide a powerful confirmation of the SPH models. If they are not, then they provide a crucial test and we need the modellers to think again. The data analysis was performed on the Starlink-supported computer network at the Institute of Astronomy. PJO acknowledges support from PPARC and Queens’ College.
no-problem/9912/astro-ph9912345.html
ar5iv
text
# ‘FIRST LIGHT’ IN THE UNIVERSE; WHAT ENDED THE “DARK AGE”? ## 1 INTRODUCTION One of the outstanding achievements of cosmology is that the state of the universe when it was only a few seconds old seems to be well understood. The details have firmed up, and we can make confident predictions about primordial neutrinos, and He and D nucleosynthesis. This progress, spanning the last 30 years, owed a lot, on the theoretical side, to David Schramm and his Chicago colleagues. The way the universe cools, and eventually recombines, and the evolution of the (linear) perturbations that imprint angular structures on the microwave background, is also well understood. But this gratifying simplicity ends when primordial imhomogeneities and density contrasts evolve into the non-linear regime. The Universe literally entered a dark age about 300,000 years after the big bang, when the primordial radiation cooled below 3000K and shifted into the infrared. Unless there were some photon input from (for instance) decaying particles, or string loops, darkness would have persisted until the first non-linearities developed into gravitationally-bound systems, whose internal evolution gave rise to stars, or perhaps to more massive bright objects. Spectroscopy from the new generation of 8-10 metre telescopes now complements the sharp imaging of the Hubble Space Telescope (HST); these instruments are together elucidating the history of star formation, galaxies and clustering back, at least, to redshifts $`z=5`$. Our knowledge of these eras is no longer restricted to ‘pathological’ objects such as extreme AGNs – this is one of the outstanding astronomical advances of recent years. In addition, quasar spectra (the Lyman forest, etc) are now observable with much improved resolution and signal-to-noise; they offer probes of the clumping, temperature, and composition of diffuse gas on galactic (and smaller) scales over an equally large redshift range, rather as ice cores enable geophysicists to probe climatic history. Detailed sky maps of the microwave background (CMB) temperature (and perhaps its polarization as well) will soon offer direct diagnostics of the initial fluctuations from which the present-day large-scale structure developed. Most of the photons in this background have travelled uninterruptedly since the recombination epoch at $`z=1000`$, when the fluctuations were still in the linear regime. We may also, in the next few years, discover the nature of the dark matter; computer simulations of structure formation will not only include gravity, but will incorporate the gas dynamics and radiation of the baryonic component in a sophisticated way. But these advances may still leave us, several years from now, uncertain about the quantitative details of the whole era from $`10^6`$ to $`10^9`$ years – the formation of the first stars, the first supernovae, the first heavy elements; and how and when the intergalactic medium was reionized. Even by the time Planck/Surveyor and the Next Generation Space Telescope (NGST) have been launched, we may still be unable to compute crucial things like the star formation efficiency, feedback from supernovae. etc – processes that ‘semi-analytic’ models for galactic evolution now parametrise in a rather ad hoc way. And CMB fluctuations will still be undiscernable on the very small angular scales that correspond to subgalactic structures, which, in any hierarchical (‘bottom up’) scenario would be the first non-linearities to develop. 
So the ‘dark age’ is likely to remain a topic for lively controversy at least for the next decade. ## 2 COSMOGONIC PRELIMINARIES: <br>MOLECULAR HYDROGEN AND UV <br>FEEDBACK ### 2.1 The H<sub>2</sub> cooling regime Detailed studies of structure formation generally focus on some variant of the cold dark matter (CDM) cosmogony – with a specific choice for $`\mathrm{\Omega }_{\mathrm{CDM}}`$, $`\mathrm{\Omega }_b`$ and $`\mathrm{\Lambda }`$. Even if such a model turns out to be oversimplified, it offers a useful ’template’ whose main features apply generically to any ‘bottom up’ model for structure formation. There is no minimum scale for gravitational aggregation of the CDM. However, the baryonic gas does not ‘feel’ the very smallest clumps, which have very small binding energies: pressure opposes condensation of the gas on scales below a (time dependent) Jeans scale – roughly, the size of a comoving sphere whose boundary expands at the sound speed. The overdense clumps of CDM within which ‘first light’ occurs must provide a deep enough potential well to pull the gas into them . But they must also – a somewhat more stringent requirement – yield, after virialisation, a gas temperature such that radiative cooling is efficient enough to allow the gas to contract further. The dominant coolant for gas of primordial composition is molecular hydrogen. This has been considered by many authors, from the 1960s onwards; see recent discussions by, for instance, Tegmark et al. (1997), Haiman, Rees and Loeb (1997), Haiman, Abel and Rees (1999). In a uniformly expanding universe, only about $`10^6`$ of the post-recombination hydrogen is in the form of H<sub>2</sub>. However this rises to $`10^4`$ within collapsing regions – high enough to permit cooling at temperatures above a few hundred degrees. So the first ‘action’ would have occurred within clumps with virial temperatures of a few hundred degrees (corresponding to a virial velocity of 2-3 km/s). Their total mass is of order $`10^5\mathrm{M}_{}`$ ; the baryonic mass is smaller by a factor $`\mathrm{\Omega }_b/\mathrm{\Omega }_{\mathrm{CDM}}`$. The gas falling into such a clump exhibits filamentary substructure: the contraction is almost isothermal, so the Jeans mass decreases as the density rises. Abel, Bryan and Norman (1999) have simulated the collapse, taking account of radiative transfer in the molecular lines, up to $`10^{12}`$ times the turnaround density; by that stage the Jeans mass (and the size of the smallest bound subclumps) has dropped to $`50100\mathrm{M}_{}`$. There is still a large gap to be bridged between the endpoint of these impressive simulations and the formation of ‘protostars’. Fragmentation could continue down to smaller masses; on the other hand, there could be no further fragmentation – indeed, as Bromm, Coppi and Larson (1999) argue, infall onto the largest blobs could lead to masses much higher than $`100\mathrm{M}_{}`$. And when even one star has formed, further uncertainties ensue. Radiation or winds may expel uncondensed material from the shallow potential wells, and exert the kind of feedback familiar from studies of giant molecular clouds in our own Galaxy. In addition to this local feedback, there is a non-local effect due to UV radiation. Photons of $`h\nu >11.18`$ eV can photodissociate H<sub>2</sub>, as first calculated by Stecher and Williams (1967). These photons, softer than the Lyman limit, can penetrate a high column density of HI and destroy molecules in virialised and collapsing clouds. 
H<sub>2</sub> cooling would be quenched if there were a UV background able to dissociate the molecules as fast as they form. The effects within clouds have been calculated by Haiman, Abel and Rees (1999) and Ciardi, Ferrara and Abel (1999). (If the radiation from the first objects had a non-thermal component extending up to KeV energies, as it might if a contribution came from accreting compact objects or supernovae, then there is a counterbalancing positive feedback. X-ray photons penetrate HI, producing photoelectrons (which themselves cause further collisional ionizationwhile being slowed down and thermalised); these electrons then catalyse further H<sub>2</sub> formation via H<sup>-</sup>). It seems most likely that the negative feedback due to photoionization is dominant. When the UV background gets above a certain threshold, H<sub>2</sub> is prevented from forming and molecular cooling is suppressed. Under all plausible assumptions about UV spectral shape, etc, this threshold is reached well before there has been enough UV production to ionize most of the medium. Therefore, only a small fraction of the UV that ionized the IGM can have been produced in systems where star formation was triggered by molecular cooling. ### 2.2 The atomic-cooling stage An atomic H-He mixture behaves adiabatically unless T is as high as 8-10 thousand degrees, when excitation of Lyman alpha by the Maxwellian tail of the electrons provides efficient cooling whose rate rises steeply with temperature. When H<sub>2</sub> cooling has been quenched, primordial gas cannot therefore cool and fragment within bound systems unless their virial temperature reaches $`10^4\mathrm{K}`$. The corresponding mass is $`10^8\mathrm{M}_{}`$. Most of the UV that ionized the IGM therefore came from stars (or perhaps from accreting black holes) that formed within systems of total mass $`\mathrm{}>10^8\mathrm{M}_{}`$. ## 3 THE EPOCH OF IONIZATION <br>BREAKTHROUGH ### 3.1 UV production in ‘subgalaxies’ The IGM would have remained predominantly neutral until ‘subgalaxies’, with total (dark matter) masses above $`10^8\mathrm{M}_{}`$ and virial velocities 20 km/s, had generated enough photoionizing flux from O-B stars, or perhaps accreting black holes (see Loeb (1999) and references cited therein). How many of these ‘subgalaxies’ formed, and how bright each one would be, depends on another big uncertainty: the IMF and formation efficiency for the Population III objects. The gravitational aspects of clustering can all be modeled convincingly by computer simulations. So also, now, can the dynamics of the baryonic (gaseous) component – including shocks and radiative cooling. The huge dynamic range of the star-formation process cannot be tracked computationally up to the densities at which individual stars condense out. But the nature of the simulation changes as soon as the first stars (or other compact objects) form. The first stars (or other compact objects) exert crucial feedback – the remaining gas is heated by ionizing radiation, and perhaps also by an injection of kinetic energy via winds and even supernova explosions – which is even harder to model, being sensitive to the IMF, and to further uncertain physics. Three major uncertainties are: (i) What is the IMF of the first stellar population? The high-mass stars are the ones that provide efficient (and relatively prompt) feedback. 
It plainly makes a big difference whether these are the dominant type of stars, or whether the initial IMF rises steeply towards low masses (or is bimodal), so that very many faint stars form before there is a significant feedback. The Population III objects form in an unmagnetised medium of pure H and He, bathed in background radiation that may be hotter than 50 K when the action starts (at redshift $`z`$ the ambient temperature is of course $`2.7(1+z)`$ K). Would these conditions favour a flatter or a steeper IMF than we observed today? This is completely unclear: the density may become so high that fragmentation proceeds to very low masses (despite the higher temperature and absence of coolants other than molecular hydrogen); on the other hand, massive stars may be more favoured than at the present epoch. Indeed, fragmentation could even be so completely inhibited that the first things to form are supermassive holes. (ii) Quite apart from the uncertainty in the IMF, it is also unclear what fraction of the baryons that fall into a clump would actually be incorporated into stars before being re-ejected. The retained fraction would almost certainly be an increasing function of virial velocity: gas more readily escapes from shallow potential wells. (iii) The influence of the Population III objects depends on how much of their radiation escapes into the IGM. Much of the Lyman continuum emitted within a ‘subgalaxy’ could, for instance, be absorbed within it. The total number of massive stars or accreting holes needed to build up the UV background shortward of the Lyman limit and ionize the IGM, and the concomitant contamination by heavy elements, would then be greater. All these three uncertainties would, for a given fluctuation spectrum, affect the redshift at which molecules were destroyed, and at which full ionization occurred. Perhaps I’m being pessimistic, but I doubt that either observations or theoretical progress will have eliminated these uncertainties about the ‘dark age’ even by the time NGST flies. ### 3.2 How uncertain is the ionization epoch? Even if we knew exactly what the initial fluctuations were, and when the first bound systems on each scale formed, the above-mentioned uncertainties would render the ionization redshift is uncertain by at least a factor of 2. This can be easily seen as follows: Ionization breakthrough requires at least 1 photon for each ionized baryon in the IGM (one photon per baryon is obviously needed; extra photons are needed to balance recombinations, which are more important in clumps and filaments than in underdense regions). An OB star produces $`10^410^5`$ ionizing photons for each constituent baryon, so (again in very round numbers) $`10^3`$ of the baryons must condense into stars with a standard IMF to supply the requisite UV. Photoionization will be discussed in Madau’s contribution to this conference. Earlier references include Ciardi and Ferrara (1997), Gnedin and Ostriker (1998), Madau, Haardt and Rees (1999) and Gnedin (1999). We can then contrast two cases: (A) If the star formation were efficient, in the sense that all the baryons that ‘went non-linear’, and fell into a CDM clump larger than the Jeans mass, turned into stars, then the rare 3-$`\sigma `$ peaks on mass-scales $`10^8\mathrm{M}_{}`$ would suffice. 
On the other hand: (B) Star formation could plausibly be so inefficient that less than 1 percent of the baryons in a pregalaxy condense into stars, the others being expelled by stellar winds, supernovae, etc., In this case, production of the necessary UV would have to await the collapse of more typical peaks (1.5-$`\sigma `$, for instance). A 1.5-$`\sigma `$ peak has an initial amplitude only half that of a 3-$`\sigma `$ peak, and would therefore collapse at a value of $`(1+z)`$ that was lower by a factor of 2. For plausible values of the fluctuation amplitude this could change $`z_i`$ from 15 (scenario A) to 7 (scenario B). There are of course other complications, stemming from the possibility that most UV photons may be reabsorbed locally; moreover in Scenario B the formation of sufficient OB stars might have to await the build-up of larger systems, with deeper potential wells, in which stars could form more efficiently. The above examples have assumed a ‘standard’ IMF, and there is actually further uncertainty. If the Population III IMF were biased towards low-mass stars, the situation resembles inefficient star formation in that a large fraction of the baryons (not just the rare 3-$`\sigma `$ peaks) would have to collapse non-linearly before enough UV had been generated to ionize the IGM. By the time this happened, a substantial fraction of the baryons could have condensed into low mass stars. This population could even contribute to the MACHO lensing events (see section 6). ### 3.3 Detecting ‘pregalaxies’ at very high redshift. What is the chance of detecting the ancient ‘pregalaxies’ that ionized the IGM at some redshift $`z_i>5`$? The detectability of these early-forming systems, of subgalactic mass, depends which of the two scenarios in 3.2 (above) is nearer the truth. If B were correct, the individual high-z sources would have magnitudes of 31, and would be so common that there would be about one per square arc second all over the sky; on the other hand, option A would imply a lower surface density of brighter (and more readily detectable) sources for the first UV (Miralda-Escudé and Rees (1998), Barkana and Loeb (1999)). There are already some constraints from the Hubble Deep Field, particularly on the number of ‘miniquasars’ (Haiman, Madau and Loeb 1999). Objects down to 31st magnitude could be detected from the ground by looking at a field behind a cluster where there might be gravitational-lens magnification, but firm evidence is likely to await NGST. Note that scenarios A and B would have interestingly different implications for the formation and dispersal of the first heavy elements. If B were correct, there would be a large number of locations whence heavy elements could spread into the surrounding medium; on the other hand, scenario A would lead to a smaller number of brighter and more widely-spaced sources. ### 3.4 The ‘breakthrough’ epoch Quasar spectra tell us that the diffuse IGM is almost fully ionized back to $`z=5`$, but we do not know when it in effect became an HII region. The IGM would already be inhomogeneous at the time when the ionization occurred. The traditional model of expanding HII regions that overlap at a well defined epoch when ‘breakthrough’ occurs (dating back at least to Arons and McCray (1972)) is consequently rather unrealistic. By the time ionization occurs the gas is so inhomogeneous that half the mass (and far more than half of the recombinations) is within 10 percent of the volume. 
HII regions in the ‘voids’ can overlap (in the sense that the IGM becomes ionized except for ‘islands’ of high density) before even half the material has been ionized. Thereafter, the overdense regions would be ‘eroded away’: Stromgren surfaces encroach into them; the neutral regions shrink and present a decreasing cross-section; the mean free path of ionizing photons (and consequently the UV background intensity J) goes up (Miralda-Escudé, Haehnelt and Rees 1999, Gnedin 1999). The thermal history of the IGM beyond $`z=5`$ is relevant to the modelling of the absorption spectra of quasars at lower redshifts. The recombination and cooling timescales are comparable to the cosmological expansion timescale. Therefore the ‘texture’ and temperature of the filamentary structure responsible for the lines in the Lyman alpha ‘forest’ yield fossil evidence of the thermal history at higher redshifts. ### 3.5 Black hole formation and AGNs at high $`𝒛`$? The observations of high-redshift galaxies tell us that some structures (albeit perhaps only exceptional ones) must have attained galactic scales by the epoch $`z=5`$. Massive black holes (manifested as quasars) accumulate in the deep potential wells of these larger systems. Quasars may dominate the UV background at $`z<3`$: if their spectra follow a power-law, rather than the typical thermal spectrum of OB stars, then quasars are probably crucial for the second ionization of He, even if H was ionized primarily by starlight. (One interesting point that somewhat blurs this issue has recently been made by Tumlinson and Shull (1999). They note that, if the metallicity were zero, there would be no CNO cycle; high-mass stars therefore need to contract further before reaching the main sequence, and so have hotter atmospheres, emitting more photons above the He ionization edge.) At redshifts $`z=10`$, no large galaxies may yet have assembled, but CDM-type models suggest that ‘subgalaxies’ would exist. Would these have massive holes (perhaps ‘mini-AGNs’) in their centres? This is interesting for at least two reasons: first, the answer would determine how many high-energy photons, capable of doubly-ionizing He, were produced at very high redshifts (Haiman and Loeb 1998); second, the coalescence of these holes, when their host ‘subgalaxies’ merge to form large galaxies, would be signalled by pulse-trains of low-frequency gravitational waves that could be detected by space-based detectors such as LISA (Haehnelt 1994). The accumulation of a central black hole may require virialised systems with large masses and deep potential wells (cf Haehnelt and Rees 1993, Haehnelt and Kauffmann (1999)); if so, we would naturally expect the UV background at the highest redshifts to be contributed mainly by stars in ‘subgalaxies’. However, this is merely an expectation; it could be, contrariwise, that black holes readily form even in the first $`10^8\mathrm{M}_{}`$ CDM condensations (this would be an extreme version of a ‘flattened’ IMF), Were this the case, the early UV production could be dominated by black holes. This would imply that the most promising high-z sources to seek at near-IR wavelengths would be miniquasars, rather than ‘subgalaxies’. It would also, of course, weaken the connection between the ionizing background and the origin of the first heavy elements. 
### 3.6 Distinguishing between objects with $`𝒛\mathbf{>}𝒛_𝒊`$ and $`𝒛\mathbf{<}𝒛_𝒊`$ The blanketing effect due to the Lyman alpha forest – known to be becoming denser towards higher redshifts, and likely therefore to be even thicker beyond $`z=5`$ – would be severe, and would block out the blue wing of Lyman alpha emission from a high-$`z`$ source. Such objects may still be best detected via their Lyman alpha emission even though the absorption cuts the equivalent width by half. But at redshifts larger than $`z_i`$ – in other words, before ionization breakthrough – the Gunn-Peterson optical depth is so large that any Lyman alpha emision line is blanketed completely, because the damping wing due to IGM absorption spills over into the red wing (Miralda-Escudé and Rees 1998). This means that any objects detectable beyond $`z`$ would be characterised by a discontinuity at the redshifted Lyman alpha frequency. The Lyman alpha line itself would not be detectable (even though this may be the most prominent feature in objects with $`z<z_i`$). ## 4 RADIO AND MICROWAVE PROBES OF THE IONIZATION EPOCH ### 4.1 CMB fluctuations as a probe of the ionization epoch If the intergalactic medium were suddenly reionized at a redshift $`z`$, then the optical depth to electron scattering would be $`0.02h^1\left((1+z)/10\right)^{3/2}\left(\mathrm{\Omega }_bh^2/0.02\right)`$ (generalisation to more realistic scenarios of gradual reionization is straightforward). Even when this optical depth is far below unity, the ionized gas constitutes a ‘fog’– a partially opaque ‘screen’ – that attenuates the fluctuations imprinted at the recombination era; the fraction of photons that are scattered at $`z_i`$ then manifest a different pattern of fluctuations, characteristically on larger angular scales. This optical depth is consequently one of the parameters that can in principle be determined from CMB anisotropy measurements (Zaldarriaga, Spergel and Seljak 1997). It is feasible to detect a value as small as 0.1 – polarization measurements may allow even greater precision, since the scattered component would imprint polarization on angular scales of a few degrees, which would be absent from the Sachs-Wolfe fluctuations on that angular scale originating at $`t_{rec}`$. There are two effects that could introduce secondary fluctuations on small angular scales. First, the ionization may be patchy on a large enough scale for irregularities in the ‘screen’ to imprint extra angular structure on the radiation shining through from the ‘last scattering surface at the recombination epoch’. Second, the fluctuations may have large enough amplitudes for second-order effects to induce perturbations. (Hu, 1999) ### 4.2 21 cm emission, absorption and tomography The 21 cm line of HI at redshift $`z`$ would contribute to the background spectrum at a wavelength of $`21(1+z)`$ cm. This contribution depends on the spin temperature $`T_s`$ and the CMB temperature $`T_{bb}`$. It amounts to a brightness temperature of only $`0.01h^1(\mathrm{\Omega }_bh^2/0.02)((1+z)/10)^{1/2}(T_sT_{bb})/T_s\mathrm{K}`$ – very small compared with the 2.7K of the present CMB; and even smaller compared to the galactic synchrotron radiation that swamps the CMB, even at high galactic latitudes, at the long wavelengths where high-$`z`$ HI should show up. Nonetheless, inhomogeneities in the HI may be detectable because they would give rise not only to angular fluctuations but also to spectral structure. (Madau, Meiksin and Rees 1997, Tozzi et al. 
1999) If the same strip of sky were scanned at two radio frequencies differing by (say) 1 MHz, the temperature fluctuations due to the CMB itself, to galactic thermal and synchrotron backgrounds, and to discrete sources would track each other closely. Contrariwise, there would be no correlation between the 21 cm contributions, because the two frequencies would be probing ‘shells’ in redshift space whose radial separation would exceed the correlation length. It may consequently be feasible to distinguish the 21 cm background, utilizing a radio telescope with large collecting area. The fact that line radiation allows 3-dimensional tomography of the high-$`z`$ HI renders this a specially interesting technique. For the 21 cm contribution to be observable, the spin temperature $`T_s`$ must of course differ from $`T_{bb}`$. The HI would be detected in absorption or in emission depending on whether $`T_s`$ is lower or higher than $`T_{bb}`$. During the ‘dark age’ the hyperfine levels of HI are affected by the microwave background itself, and also by collisional processes. $`T_s`$ will therefore be a weighted mean of the CMB and gas temperatures. Since the diffuse gas is then cooler than the radiation (having expanded adiabatically since it decoupled from the radiation), collisions would tend to lower $`T_s`$ below $`T_{bb}`$ , so that the 21 cm line would appear as an absorption feature, even in the CMB. At the low densities of the IGM, collisions are however ineffectual in lowering $`T_s`$ substantially below $`T_{bb}`$ (Scott and Rees 1990). When the first UV sources turn on, Lyman alpha (whose profile is itself controlled by the kinetic temperature) provides a more effective coupling between the spin temperature and the kinetic temperature. If Lyman alpha radiation penetrates the HI without heating it, it can actually lower the spin temperature so that the 21 cm line becomes a stronger absorption feature. However, whatever objects generate the Lyman alpha emission would also provide a heat input, which would soon raise $`T_s`$ above $`T_{bb}`$. When the kinetic temperature rises above $`T_{bb}`$, the 21 cm feature appears in emission. The kinetic temperature can rise due to the weak shocking and adiabatic compression that accompanies the emergence of the first (very small scale) non-linear structure (cf section 2). When photoionization starts, there will also, around each HII domain, be a zone of predominantly neutral hydrogen that has been heated by hard UV or X-ray photons (Tozzi et al. (1999). This latter heat input would be more important if the first UV sources emitted radiation with a power-law (rather than just exponential) component. In principle, one might be able to detect incipient large-scale structure, even when still in the linear regime, because it leads to variations in the column density of HI, per unit redshift interval, along different lines of sight (Scott and Rees (1990)). Because the signal is so weak, there is little prospect of detecting high-$`z`$ 21 cm emission unless it displays structure on (comoving) scales of several Mpc (corresponding to angular scales of several arc minutes) According to CDM-type models, the gas is likely to have been already ionized, predominantly by numerous ionizing sources each of sub-galactic scale, before such large structures become conspicuous. 
On the other hand, if the primordial gas were heated by widely-spaced quasar-level sources, each of these would be surrounded by a shell that could feasibly be revealed by 21cm tomography using, for instance, the new Giant Meter Wave Telescope (GMRT) (Swarup (1994)). With luck, effects of this kind may be detectable. Otherwise, they will have to await next-generation instruments such as the Square-Kilometer Array. ## 5 VERY DISTANT SUPERNOVAE <br>(AND PERHAPS GAMMA-RAY BURSTS) ### 5.1 The supernova rate at high redshifts If the reheating and ionization were due to OB stars, it is straightforward to calculate how many supernovae would have gone off, in each comoving volume, as a direct consequence of this output of UV, also how many supernovae would be implicated in producing the heavy elements detected in quasar absorption lines: there would be one, or maybe several, per year in each square arc minute of sky (Miralda-Escudé and Rees 1997). The precise number depends partly on the redshift and the cosmological model, but also on the uncertainties about the UV background, and about the actual high-$`z`$ abundance of heavy elements. These high-$`z`$ supernovae would be primarily of Type 2. The typical observed light curve has a flat maximum lasting 80 days. One would therefore (taking the time dilation into account) expect each supernova to be near its maximum for nearly a year. It is possible that the explosions proceed differently when the stellar envelope is essentially metal-free, yielding different light curves, so any estimates of detectability are tentative. However, taking a standard Type 2 light curve (which may of course be pessimistic), one calculates that these objects should be approximately 27th magnitude in J and K bands even out beyond $`z=5`$. The detection of such objects would be an easy task with the NGST (Stockman 1998). With existing facilities it is marginal. The best hope would be that observations of clusters of galaxies might serendipitously reveal a magnified gravitationally-lensed image from far behind the cluster. The first supernovae may be important for another reason: they may generate the first cosmic magnetic fields. Mass loss (via winds or supernovae permeated by magnetic flux) would disperse magnetic flux along with the heavy elements. The ubiquity of heavy elements in the Lyman alpha forest indicates that there has been widespread diffusion from the sites of these early supernovae, and the magnetic flux could have diffused in the same way. This flux, stretched and sheared by bulk motions, can be the ‘seed’ for the later amplification processes that generate the larger-scale fields pervading disc galaxies. ### 5.2 Gamma ray bursts: the most luminous known cosmic objects Some subset of massive stars may give rise to gamma-ray bursts. It may indeed turn out that all the long-duration bursts detected by Beppo-SAX involve some supernova-type event, and that the shorter bursts (maybe less highly beamed) are caused by compact binary coalescence at more modest redshifts. Bursts have already been detected out to $`z=3.4`$; their optical afterglows are 100 times brighter than supernovae. Prompt optical emission concurrent with the 10-100 seconds of the burst itself (observed in one case so far, but expected in others) is more luminous by a further factor 100. 
Gamma-ray bursts are, however, far rarer than supernovae – even though the afterglow rate could exceed that of the bursts themselves if the gamma rays were more narrowly beamed than the slower-moving ejecta that cause the afterglow. Detection of ultra-luminous optical emission from bursts beyond $`z=5`$ would offer a marvellous opportunity to obtain a high-resolution spectrum of intervening absorption features (Lamb and Reichart 1999). ## 6 WHERE ARE THE OLDEST (AND THE EXTREME METAL-POOR) STARS? The efficiency of early mixing is important for the interpretation of stars in our own galaxy that have ultra-low metallicity – lower than the mean metallicity of $`10^{2}`$ to $`10^{3}`$ times solar that is likely to have been generated in association with the UV background at $`z>5`$. For a comprehensive review of what is known about such stars, see Beers (1999). If the heavy elements were efficiently mixed, then these stars would themselves need to have formed before galaxies were assembled. The mixing, however, is unlikely to operate on scales as large as a protogalaxy – if it did, the requisite bulk flow speeds would be so large that they would completely change the way in which galaxies assembled, and would certainly need to be incorporated in simulations of the Lyman alpha forest. As White and Springel (1999) have recently emphasised, it is important to distinguish between the first stars and the most metal-poor stars. The former would form in high-sigma peaks that would be correlated owing to biasing, and which would preferentially lie within overdensities on galactic scales. These stars would therefore now be found within galactic bulges. However, most of the metal-poor stars could form later. They would now be in the halos of galaxies, though they would not have such an extended distribution as the dark matter. This is because they would form in subgalaxies that would tend, during the subsequent mergers, to sink via dynamical friction towards the centres of the merged systems. There would nevertheless be a correlation between metallicity, age and kinematics within the Galactic Halo. This is a project where NGST could be crucial, especially if it allowed detection of halo stars in other nearby galaxies. The number of such stars depends on the IMF. If this were flat, there would be fewer low-mass stars formed concurrently with those that produced the UV background. If, on the other hand, the IMF were initially steep, there could in principle be a lot of very low mass (MACHO) objects produced at high redshift, many of which would end up in the halos of galaxies like our own. ## 7 SUMMARY Perhaps only 5 percent of star formation occurred before $`z=5`$ (the proportion could be higher if most of the light from these early stars were reprocessed by dust). But these early stars were important: they generated the first UV and the first heavy elements; they provided the backdrop for the later formation of big galaxies and larger-scale structure. Large-scale structure may be elucidated within the next decade, by ambitious surveys (2-degree field and Sloan) and studies of CMB anisotropies; as will be the evolution of galaxies and their morphology. The later talks in this conference will highlight the exciting progress and prospects in this subject. But despite this progress, we shall, for a long time, confront uncertainty about the efficiency and modes of star formation in early structures on subgalactic scales.
I am grateful to my collaborators, especially Tom Abel, Zoltan Haiman, Martin Haehnelt, Avi Loeb, Jordi Miralda-Escudé, Piero Madau, Avery Meiksin, and Paulo Tozzi, for discussion of the topics described here. I am also grateful to the Royal Society for support.
# On de Sitter deflationary cosmology from the spin-torsion primordial fluctuations and COBE data L.C. Garcia de Andrade<sup>1</sup><sup>1</sup>1Departamento de Fisica Teorica, Instituto de Física, UERJ, Rua São Francisco Xavier 524, Rio de Janeiro, CEP: 20550-013, Brasil. e-mail: garcia@dft.if.uerj.br. Abstract Fluctuations around the de Sitter solution of the Einstein-Cartan field equations are obtained in terms of the primordial matter density fluctuations and the spin-torsion density fluctuations inferred from COBE data. The Einstein-de Sitter solution is shown to be unstable even in the absence of torsion. The spin-torsion density fluctuation needed to generate a deflationary phase is computed from the COBE data. Recently D. Palle has computed the primordial matter density fluctuations from COBE satellite data. More recently I have extended Palle’s work to include dilaton fields. I have also considered a mixed inflation model with spin-driven inflation and inflaton fields, where the spin-torsion density has been obtained from the COBE data on the temperature fluctuations. However, in these last attempts no consideration was given to the spinning fluid, and torsion was taken as coming only from the density of inflaton fields, which of course could be justified only in the case of massless neutrinos. Earlier, Maroto and Shapiro discussed the de Sitter metric fluctuations and showed that, in higher-order gravity with dilatons and torsion, the stability of de Sitter solutions depends on the parametrization and dimension, but that for a given dimension one can always choose the parametrization in such a way that the solutions are unstable. In this letter we show that, starting from the Einstein-Cartan equations as given in Gasperini for a four-dimensional spacetime with spin-torsion density, the de Sitter solutions are also unstable for large values of time. Of course one should bear in mind that the Maroto-Shapiro solutions do not possess spin, but are based on a string-type higher-order gravity where torsion enters as in the Ramond action. Let us start from the Gasperini form of the Einstein-Cartan equations for the spin-torsion density $$H^2=\frac{8\pi G}{3}(\rho -2\pi G\sigma ^2)$$ (1) and $$\dot{H}+H^2=-\frac{4\pi G}{3}(\rho +3p-8\pi G\sigma ^2)$$ (2) where $`H(t)=\dot{a}/a`$ is the Hubble parameter. Following the linear perturbation method in cosmological models as described in Peebles we have $$H(t,r)=H(t)[1+\alpha (r,t)]$$ (3) where $`\alpha =\frac{\delta H}{H}`$ is the fractional fluctuation of the Hubble parameter; the Friedmann metric reads $$ds^2=dt^2-a^2(dx^2+dy^2+dz^2)$$ (4) Also the matter density is given by $$\rho (r,t)=\rho (t)[1+\beta (r,t)]$$ (5) and $$\sigma (r,t)=\sigma (t)[1+\gamma (r,t)]$$ (6) where $`\beta =\frac{\delta \rho }{\rho }`$ is the matter density fluctuation, which is approximately $`10^{5}`$ as given by COBE data, and $`\gamma (r,t)=\frac{\delta \sigma }{\sigma }`$, where $`\sigma `$ is the spin-torsion density. Substituting these expressions into the Gasperini-Einstein-Cartan equations above, in the simple case where the pressure $`p`$ vanishes (dust), we obtain $$\dot{\alpha }=\frac{\dot{H}}{H}+\frac{8\pi G}{3H}[\rho \beta -16\pi G\sigma \gamma ]$$ (7) This last equation can be integrated to give $$\alpha =1+lnH+8\pi G\left[\int \frac{\rho \beta dt}{H}-16\pi G\int \frac{\sigma \gamma dt}{H}\right]$$ (8) When the variations of the mass density and the spin density with time are small, they can be taken as approximately constant and we are able to
perform the integration for the de Sitter metric $`(H=H_0=constant)`$ as $$\alpha =1+lnH_0+\frac{8\pi G}{3H_0}t[\rho _0\beta -16\pi G\sigma _0\gamma ]$$ (9) which shows clearly that the de Sitter solution of the Einstein-Cartan equations is unstable, and that the instability can be computed in terms of the matter and spin-torsion densities. Let us now compute the spin-torsion fluctuation from expression (9) necessary to produce deflation $`(\frac{\dot{a}}{a}<0)`$. By making use of the mean cosmological matter density $`\rho _0=10^{31}g.cm^3`$, the spin-torsion density computed in (7), $`\sigma _0=10^{18}`$, and the COBE data for the matter density fluctuation in the Universe, along with the condition of small density perturbations $`\alpha \mathrm{}1`$ in formula (9), after a straightforward algebra one obtains $$\frac{\delta \sigma }{\sigma }=10^{54}\mathrm{cgs}\mathrm{units}$$ (10) This result shows that a slight fluctuation in the spin-torsion density may be able to produce a deflationary phase in the de Sitter Universe. If no deflationary phase is found in the Universe, it may be that torsion is not necessary to describe spin and this model is wrong, or simply that a deflationary phase has not yet been observed. One may rewrite expression (2), for dust, as $$\frac{\ddot{a}}{a}=\dot{H}+\frac{\dot{a}^2}{a^2}=-\frac{4\pi G}{3}(\rho -8\pi G\sigma ^2)$$ (11) Since the other deflation condition is $`(\dot{a}<0)`$, one obtains from equation (11) a constraint on the matter density given by $$\rho <8\pi G\sigma ^2$$ (12) which clearly shows that de Sitter deflation occurs in a spin-torsion-dominated phase of the Universe. This situation may lead to energy-condition violations, which in turn may allow the presence of wormholes or even dark matter. Acknowledgements I would like to thank Professor Ilya Shapiro and Prof. Rudnei Ramos for helpful discussions on the subject of this paper. Financial support from CNPq is gratefully acknowledged.
# Twist and writhe dynamics of stiff filaments ## Abstract This letter considers the dynamics of a stiff filament, in particular the coupling of twist and bend via writhe. The time dependence of the writhe of a filament is $`W_r^2Lt^{1/4}`$ for a linear filament and $`W_r^2t^{1/2}/L`$ for a curved filament. Simulations are used to study the relative importance of crankshaft motion and tube-like motion in twist dynamics. Fuller’s theorem, and its relation with the Berry phase, is reconsidered for open filaments. DNA and other stiff polymer systems such as actin filaments are of interest both for their intrinsic biological importance and because micro-manipulation techniques give a very detailed vision of their properties, allowing one to study fundamental processes in polymer statics and dynamics. Recent theoretical and experimental work on DNA force-extension curves has shown the importance of the static coupling between bend and twist in stiff polymers. Non-trivial twist-writhe correlations in flexible polymers have recently been found in a simple dynamic lattice model in which linking number is conserved. Here I examine the dynamics of twist and writhe fluctuations in order to clarify the driving forces and dissipative processes important in the motion of filaments. I use a combination of scaling arguments and simulations; the numerical studies are complementary to recent work which considers the zero-temperature motion of stiff filaments via integration of the continuum differential equations. Here I study Brownian dynamics of stiff filaments and introduce a generalized bead-spring model suitable for studying twist and writhe dynamics of polymers. There are two physical processes which can lead to the rotation of the end of a filament about its local tangent. The two mechanisms of spinning are twist and writhe. Twist corresponds to excitation of internal torsional degrees of freedom of the filament. Writhing is due to the three-dimensional geometry of bending of the filament in space. We shall reserve the word “spinning” for this process of end rotation and denote the spinning angle by $`\mathrm{\Psi }`$. The two contributions are additive, so that we can write $`\mathrm{\Psi }=\varphi +W_r`$, with $`\varphi `$ the contribution from internal twisting and $`W_r`$ the contribution from the geometry of the path in space. Excitations of internal twist degrees of freedom are well understood and have been observed in DNA and actin via depolarized light scattering experiments. The writhing of a filament is best expressed with the help of Fuller’s theorem. Consider a filament (with boundary conditions such that the two end tangents are maintained parallel) which is bent into a non-planar curve. The tangent to the filament t(s) sweeps out a curve on the unit sphere, $`𝒮_2`$, as a function of the internal coordinate $`s`$. It can be shown that the contribution to the spinning due to the writhe is just the area, $`A_\mathrm{\Omega }`$, enclosed by $`𝐭(s)`$ on $`𝒮_2`$. Let us assume that we are working with uniform filaments with circular cross sections; then the two processes are controlled by two independent elastic constants, the torsional stiffness, $`K`$, and the bending stiffness, $`\kappa `$. If we work in units such that $`k_BT=1`$, these two elastic constants have the dimensions of lengths, and are indeed the persistence lengths for static torsional and bending correlations of the filament.
The dynamics obeyed by the torsional and bending modes are, however, different, which leads to distinct contributions to $`\mathrm{\Psi }`$, as we shall now demonstrate. The torsional motion of a straight filament obeys a Langevin equation $$a^2\frac{\partial \varphi }{\partial t}=K\frac{\partial ^2\varphi }{\partial s^2}+\xi (t,s)$$ (1) where the angle $`\varphi (s,t)`$ is the local rotation of the filament in the laboratory frame, $`\xi `$ is the thermal noise, and $`a`$ is comparable to the radius of the filament (we neglect all pre-factors of order unity and use units in which $`\eta =k_BT=1`$; in this system of units time has the dimensions of a volume, so that $`1\mathrm{s}4\mu \mathrm{m}^3`$). The solution of eq. (1) is well known: it is characterized by dynamic correlations of the angle which diffuse along the filament according to the law $`l_{tw}^2tK/a^2`$. A monomer on the filament rotates with mean-square angle $`\varphi ^2\sqrt{t/Ka^2}`$, if $`l_{tw}<L`$. When $`l_{tw}>L`$ the twisting motion is sensitive to the boundary conditions on the filament; for free boundary conditions the filament rotates freely and $`\varphi _{free}^2t/La^2`$. If one end is held at a fixed angle, the other has an amplitude of rotation which saturates to $`\varphi _{hold}^2L/K`$. Let us now calculate the writhing contributions to the spinning. For a short filament $`(L/\kappa <1)`$ oriented along the z-axis the area enclosed by the curve $`𝐭(s)`$ is given by $`A_\mathrm{\Omega }=\frac{1}{2}\int e_z(\dot{𝐭}\times 𝐭)ds.`$ The writhe of a filament is signed and averages to zero for a polymer at thermal equilibrium. We shall consider the statistics and dynamics of $`A_\mathrm{\Omega }(t)A_\mathrm{\Omega }(t^{})`$. The linearized bending modes of the filament obey a Langevin equation $$\frac{\partial 𝐫_{}}{\partial t}=-\kappa \frac{\partial ^4𝐫_{}}{\partial s^4}+𝐟_{}(t,s)$$ (2) with $`𝐫_{}`$ the transverse fluctuations of the filament and $`𝐟_{}`$ the Brownian noise. Using eq. (2) we find that $`𝐭_i(q,0)𝐭_j(q,t)=\delta _{i,j}\mathrm{exp}(-\kappa q^4t)/\kappa q^2`$ with $`\{i,j\}=\{x,y\}`$. The fourth-order correlation function in $`A_\mathrm{\Omega }(t)A_\mathrm{\Omega }(t^{})`$ can be expanded using Wick’s theorem: a short calculation shows that $$A_\mathrm{\Omega }^2=W_r^2L^2/\kappa ^2.$$ (3) and that $$(W_r(t)-W_r(0))^2l_1(t)L/\kappa ^2t^{1/4}.$$ (4) with $`l_1(t)(\kappa t)^{1/4}`$ the characteristic length scale in solutions of eq. (2). We note that the same result can be found in a scaling approach using the following argument: the path $`𝐭(s)`$ on $`𝒮_2`$ is a Gaussian random walk; on a time scale $`t`$ each section of length $`l_1`$ of the filament re-equilibrates as a random walk with radius of gyration on $`𝒮_2`$ of $`r_\mathrm{\Omega }^2l_1/\kappa `$ and area $`A_1\pm l_1/\kappa `$. There are $`N_1=L/l_1`$ dynamically independent contributions to the writhe variation, giving $`\mathrm{\Delta }W_r^2N_1A_1^2=l_1(t)L/\kappa ^2`$ as found above. This argument shows in addition that the scaling law eq. (4) is valid at short times even for filaments for which $`L>\kappa `$. What influence does this writhe fluctuation have on the global motion of a filament? To answer this question consider two extreme, non-physical, cases before coming back to the typical experimental situation. Consider, firstly, the case $`K/\kappa \mathrm{}1`$; spinning of the end of a polymer can only occur via writhe, the twist degrees of freedom being frozen out. The derivation of eq. (4) has neglected rotational friction: it has been derived using a description of only the transverse motions of the polymer.
It can only be valid when the rotational friction of a filament is so small that spinning driven by the writhe is able to relax without build-up of torsional stress. Similar remarks have recently been made for the propagation of tensional stress in stiff polymers. In this case simple arguments allow one to deduce the validity of the approximation by balancing the driven motion due to the transverse fluctuations against any additional sources of friction. We shall now apply the same argument to the torsional motion, before checking the results numerically. For the rotational friction to be negligible, the free rotational diffusion of a section of filament of length $`L`$ must be faster than the driven writhing motion, implying $`\varphi _{free}^2>\mathrm{\Delta }W_r^2`$ or $`t/La^2>l_1(t)L/\kappa ^2`$. For filaments which do not satisfy this criterion, i.e. when $`L>l_{W_r}(t)=\sqrt{l_1^3\kappa /a^2}`$, the torsional stress has not had time to propagate between the two ends of the filament and the law (4) does not apply. If we now observe the end of a filament for times shorter than the torsional equilibration time, the spinning is due to an end section of length $`l_{W_r}`$ and we should substitute this effective length in eq. (4) to calculate the end motion: $`\mathrm{\Delta }W_r^2l_1^{5/2}/a\kappa ^{3/2}t^{5/8}.`$ It is only after the propagation of the torsional fluctuations over the whole length of the filament that (4) becomes true. We thus hypothesize a scaling behavior for the spinning of a filament with $`K/\kappa \mathrm{}1`$: $$\mathrm{\Psi }^2=\frac{Lt^{1/4}}{\kappa ^{7/4}}𝒬\left(\frac{t\kappa ^{7/3}}{a^{8/3}L^{8/3}}\right),$$ (5) with $`𝒬(x)x^{3/8}`$ for $`x`$ small and $`𝒬(x)1`$ for $`x`$ large. A second extreme case is $`K=0`$, as is for instance the case for many simple numerical bead-models of worm-like chains. In this case there is no need for the beads to rotate to follow writhe; the writhe fluctuations are absorbed by the internal twist degree of freedom without cost. The writhe does not lead to spinning; we do expect, however, that there are strong dynamic (anti)correlations between twist and writhe over the length of the filament. We conclude that the law (4) is valid for all times; it has, however, no consequence on the rotational dynamics of a monomer on the chain that could be detected, for instance, in a scattering experiment. Let us now turn to the physical case, $`K/\kappa 1`$. From the expressions for $`l_{W_r}`$ and $`l_{tw}`$ we find that $`l_{tw}/l_{W_r}=\sqrt{l_1/\kappa }\mathrm{}1`$ at short times. On the length scale $`l_{tw}`$ the twist field and the writhe of the filament are in equilibrium and we can add the fluctuations of the twist and writhe degrees of freedom. Beyond $`l_{tw}`$ successive sections of the polymer are dynamically decoupled: a writhe fluctuation cannot be transmitted faster than the signal transmitted by $`l_{tw}`$, and the second scenario becomes valid with $`l_{tw}`$ playing the role of a bead size. Beyond $`l_{tw}`$ the filament can writhe freely without any local consequence on the dynamics of the chain. Thus the experimental consequences of the writhe fluctuations are probably negligible for times such that $`l_{tw}<L`$. When $`l_{tw}`$ reaches $`L`$ the amplitude of twist fluctuations saturates, while the dynamics of the slower bending modes continues to evolve, with $`\mathrm{\Psi }`$ varying as eq. (4) until $`l_1(t)=L`$. These additional fluctuations could be observed experimentally.
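Before turning to the full bead-spring model, the purely torsional part of this picture can be tested in isolation. The sketch below integrates the overdamped version of eq. (1) on a discrete lattice, in units $`\eta =k_BT=1`$; the parameter values are illustrative only. The site-to-site variance of the rotation angle should grow as $`t^{1/2}`$ while $`l_{tw}<L`$, the same scaling as the single-monomer rotation quoted after eq. (1).

```python
import numpy as np

# Minimal check of the twist equation (1) in its overdamped form,
#   a^2 dphi/dt = K d^2phi/ds^2 + noise,   units eta = k_B T = 1.
# Illustrative parameters; the roughness of the twist profile should
# grow as t^(1/2) while the twist diffusion length is smaller than L.

rng = np.random.default_rng(0)
N, ds, a, K = 400, 1.0, 1.0, 50.0
dt = 0.2 * a**2 * ds**2 / K            # stable explicit time step
phi = np.zeros(N)

times, var = [], []
for step in range(1, 20001):
    lap = np.empty(N)
    lap[1:-1] = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]
    lap[0], lap[-1] = phi[1] - phi[0], phi[-2] - phi[-1]   # torque-free ends
    phi += dt * (K / a**2) * lap / ds**2 \
           + rng.normal(0.0, np.sqrt(2.0 * dt / (a**2 * ds)), N)
    if step % 2000 == 0:
        times.append(step * dt)
        var.append(np.mean((phi - phi.mean())**2))  # uniform rotation removed

slope = np.polyfit(np.log(times), np.log(var), 1)[0]
print("growth exponent ~ %.2f (expected 1/2)" % slope)
```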
These arguments have been checked by simulations on a discretized bead-spring model: take $`N+1`$ spherical beads of diameter $`a`$ connected by stiff harmonic springs. Each bead is characterized by its position $`𝐫`$ and a triad of orthonormal vectors, $`𝐌=\{𝐛,𝐧,𝐭\}`$. The vector $`𝐭`$ is an approximation to the local tangent to the filament at $`𝐫`$. $`\{𝐧,𝐛\}`$ span the normal space of the filament; $`𝐧`$ could describe the position of some feature, such as the large groove in DNA, on the surface of the molecule. The springs are not connected to the centers of the beads; rather, a bead $`i`$ is linked to its neighbors by connections situated on its surface at $`\{𝐫_i-a𝐭_i/2,𝐫_i+a𝐭_i/2\}`$. The energy is $`E`$ $`=`$ $`B/2{\displaystyle \underset{i}{\sum }}(𝐫_i-𝐫_{i+1}+a𝐭_i/2+a𝐭_{i+1}/2)^2`$ (6) $`+`$ $`\kappa /2a{\displaystyle \underset{i}{\sum }}(1-𝐭_i𝐭_{i+1})`$ (7) $`+`$ $`K/4a{\displaystyle \underset{i}{\sum }}(1-𝐧_i𝐧_{i+1}-𝐛_i𝐛_{i+1}+𝐭_i𝐭_{i+1})`$ (8) The first term imposes the linear topology of the filament; when $`B`$ is large the curvilinear length of the filament is $`L=Na`$. In the simulations we are only interested in the limit of large $`B`$. The second term is a bending energy. The third term is the torsional energy. I integrate the equations of motion for the position and orientation, adding friction and thermal noise coupled to the translational and angular velocities, effectively leading to Brownian dynamics. Since we are particularly interested in the writhe dynamics, I use boundary conditions where the positions of the ends of the filament are free to move but the end tangents are constrained via external torques, so that the writhe can be calculated via Fuller’s result. A first series of simulations was performed to study the internal twisting dynamics of very long filaments. This simulation was performed to confirm the intuition that twist modes can transport torsional stress even in the presence of large bends in the filament. This stress is carried by rapid spinning of the filament in a slowly evolving tube, rather than by rotation of the whole filament collectively moving against solvent friction (like the rotation of a rigid crankshaft). Simulations are performed for filaments of varying length with two possible boundary conditions. The first boundary condition is that both ends are free to spin, and we look for a crossover from the dynamics of internal modes described by eq. (1) to free rotation. In the second case we block the rotation of one end and check that this leads to a saturation of the rotation angle. The scaling displayed by Fig. (1) shows that torsion propagates perfectly well over many persistence lengths without hindrance from the tortuosity of the path in space. Eq. (1) is valid for thermally bent filaments as well as for torsional fluctuations around a linear configuration. A second family of simulations was performed to study the writhe and twist fluctuations of filaments with very high torsional constants, to verify the predictions made for the writhe dynamics. Fig. (2) shows that the scaling proposed above, eq. (5), is indeed seen in the case of high torsional stiffnesses. In order to calculate the writhe of a curve via Fuller’s theorem we have until now considered paths $`𝐭(s)`$ which are closed on $`𝒮_2`$. In general this is most inconvenient: there are many experimental situations where one would like to compare the spinning of filaments oriented in arbitrary directions.
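As a note on the analysis of such simulations: the writhe contribution to the spinning can be estimated directly from the discrete bead tangents, as the area swept on $`𝒮_2`$ plus a geodesic closure term for open paths. The sketch below is a minimal implementation (it anticipates the result of eq. (10) derived next); the example curve is arbitrary.

```python
import numpy as np

# Sketch: spinning angle of a discretized filament from its tangents,
# via the swept area on the unit sphere plus a geodesic closure term
# for open paths (this anticipates eq. (10) below).

def spinning_angle(tangents, ez=np.array([0.0, 0.0, 1.0])):
    """tangents: (N, 3) array of unit tangent vectors t(s)."""
    t = np.asarray(tangents)
    # discrete form of (1/2) * int e_z . (t' x t) ds
    bulk = 0.5 * (np.cross(t[1:] - t[:-1], t[:-1]) @ ez).sum()
    closure = 0.5 * ez @ np.cross(t[0], t[-1])   # geodesic closure term
    return bulk + closure

# example: a filament nearly aligned with z whose tangent circles the pole
s = np.linspace(0.0, 1.0, 200)
t = np.stack([0.1 * np.sin(4.0 * np.pi * s),
              0.1 * np.cos(4.0 * np.pi * s),
              np.ones_like(s)], axis=1)
t /= np.linalg.norm(t, axis=1, keepdims=True)
print("Omega_z ~", spinning_angle(t))   # ~ signed area enclosed on the sphere
```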
We now consider how one might generalize the idea of writhe and spinning to filaments with arbitrary open boundary conditions. In general it is impossible to compare the two rotation frames $`𝐌(L)`$ and $`𝐌(0)`$. The main ambiguity comes from the fact that rotations are non-commutative. However, for filaments for which $`L/\kappa \mathrm{}1`$ the non-commutative nature of the rotations is a higher-order correction, and we can consider that in going along the filament, oriented along $`z`$, we have a rotation which can be unambiguously decomposed into its three Cartesian coordinates $`(\mathrm{\Omega }_x,\mathrm{\Omega }_y,\mathrm{\Omega }_z)`$. We shall proceed by studying the equation $`\partial 𝐌(s)/\partial s=𝐎(s)𝐌(s)`$, where $`𝐎_{i,j}=ϵ_{ijk}\omega ^k`$ with $`\omega ^k=dt_k/ds`$ for $`k=\{x,y\}`$. The angular velocities $`\omega ^x`$ and $`\omega ^y`$ correspond to bending the filament; $`\omega ^z=0`$ in the absence of twist. We iteratively integrate this equation. The lowest-order contribution to rotations about the z-axis comes in second order: $$\mathrm{\Omega }_z=\frac{1}{2}\int _0^Lds\,ds^{}\,\theta (s-s^{})(\omega _s^x\omega _{s^{}}^y-\omega _{s^{}}^x\omega _s^y).$$ (9) Integrating by parts to transform the $`\theta `$-function into a $`\delta `$-function gives $$\mathrm{\Omega }_z=\frac{1}{2}e_z\left(\int _0^L(\dot{𝐭}\times 𝐭)ds+𝐭(0)\times 𝐭(L)\right)$$ (10) For a closed curve on $`𝒮_2`$ the first term is just the area enclosed by the curve $`𝐭(s)`$ on a patch of the sphere, in agreement with Fuller’s theorem. The second, boundary, term, present when the curve is open, corresponds to closing the path by a geodesic from $`𝐭(L)`$ to $`𝐭(0)`$. Thus we have a simple generalization of Fuller’s theorem valid for short chains, allowing one to calculate the writhe state of a polymer with fluctuating boundary conditions. We remark that Fuller’s theorem is closely related to Berry’s phase in experiments on polarized light transmission along bent optical fibers. The vector $`𝐭(s)`$ corresponds to the local tangent to the fiber, while the vectors $`\{𝐧,𝐛\}`$ transform in exactly the same manner as the plane of polarization, via parallel transport on $`𝒮_2`$. The problem of closing paths in $`𝒮_2`$ has close analogies in the treatment of Berry’s phase in non-cyclic Hamiltonians. In quantum and optical systems interference phenomena allow one to define the relative phase of a system even if the evolution has not been cyclic, via the Pancharatnam connection. This convention also closes an open trajectory via a geodesic; in wave physics this convention is even generally applicable and is not limited to small paths in the appropriate projective Hilbert space. All the above discussion has been for a filament whose ground state is linear. Let us repeat the arguments leading to (4) for short bent filaments, to study the writhe fluctuations of short curved DNA sections. The pre-existing bend considerably modifies the arguments. Consider a section of filament which bends by an angle $`\mathrm{\Phi }`$ at zero temperature due to intrinsic curvature. At zero temperature the bent loop has a tangent map which maps to the equator of $`𝒮_2`$. Under the map to $`𝒮_2`$ a distance of $`s`$ in real space becomes $`\mathrm{\Phi }s/L`$. At finite temperature there is competition between this constant drift and the thermal agitation, which leads to a Gaussian random walk on $`𝒮_2`$. Let us now consider the dynamics in a scaling picture: for the very shortest times only small-scale structure is changing, and the writhe dynamics reduces to the case discussed above.
At longer times the stretched path moves collectively: each section of length $`\delta _1=\mathrm{\Phi }l_1/L`$ on $`𝒮_2`$ moves up or down by $`r_\mathrm{\Omega }=\sqrt{l_1/\kappa }`$. Counting $`N_1=L/l_1`$ dynamically independent sections, $$\mathrm{\Delta }W_r^2N_1(\delta _1r_\mathrm{\Omega })^2=\mathrm{\Phi }^2l_1^2/\kappa Lt^{1/2}.$$ (11) The exponent for the writhe fluctuations has changed from 1/4 to 1/2 due to the stretching of the configuration in $`𝒮_2`$. This prediction is tested in Fig. (2), where we have added a spontaneous bending energy, of the form $`\alpha \mathrm{\Sigma }_i𝐧_𝐢𝐭_{𝐢+\mathrm{𝟏}}`$, to curve the filament. We now see two distinct regimes in $`\mathrm{\Psi }^2t^{1/2}`$, due first to the intrinsic twist dynamics and then to the writhe dynamics; the writhing contribution is also much enhanced over its value for linear filaments. It would also be interesting to study the case of closed loops, but the simple scaling argument used here becomes considerably more complicated due to the non-local integral constraint $`\oint 𝐭(s)ds=0`$ coming from closure. This paper has studied the short-time regime dominated by the bend and twist rigidities, rather than the very long time dynamics where crossovers to Zimm or Rouse dynamics occur. Our arguments have been for the simplest model of a stiff polymer, of uniform cross section and without disorder. The Marko-Siggia energy function already has rich static behavior, and one would anticipate that the cross terms in this Hamiltonian could increase the importance of the dynamical effects discussed here. Similarly, disorder in the ground state is expected to dramatically modify the dynamics, by giving a preformed writhe on $`𝒮_2`$ and by modifying the picture of simple spinning of the polymer in its tube due to new dissipative processes; it has recently been argued that a mixture of crankshaft and tube spinning should co-exist even in the case of weak disorder coming from sequence fluctuations in DNA. I would like to thank A. Ajdari, R. Everaers, F. Julicher, P. Olmstead, C. Wiggins and T.A. Witten for discussions.
# Quantitative investigation of the mean-field scenario for the structural glass transition from a schematic mode-coupling analysis of experimental data ## Abstract A quantitative application to real supercooled liquids of the mean-field scenario for the glass transition ($`T_g`$) is proposed. This scenario, based on an analogy with spin-glass models, suggests a unified picture of the mode-coupling dynamical singularity ($`T_c`$) and of the entropy crisis at the Kauzmann temperature ($`T_K`$), with $`T_c>T_g>T_K`$. Fitting a simple set of mode-coupling equations to experimental light-scattering spectra of two fragile liquids and deriving the equivalent spin-glass model, we can estimate not only $`T_c`$, but also the static transition temperature $`T_s`$ corresponding supposedly to $`T_K`$. For the models and systems considered here, $`T_s`$ is always found above $`T_g`$, in the fluid phase. A comparison with recent theoretical calculations shows that this overestimation of the ability of a liquid to form a glass seems to be a generic feature of the mean-field approach. Despite considerable experimental and theoretical efforts, understanding the dynamics of supercooled liquids and the related phenomenon of glass transition remains a challenging problem of classical statistical mechanics . Recently, Kirkpatrick *et al.* conjectured that generalized spin-glass models with discontinuous one-step replica symmetry breaking (1RSB) transitions (like the spin-glass with $`p`$-spin interactions ($`p>2`$) or the $`q`$-state Potts glass ($`q>4`$)) could be relevant models for the description of the structural glass transition. Essentially two points substantiate this analogy. Firstly, when considered in the mean-field limit where the interactions between spins have infinite range, the generalized spin-glasses display a (static) 1RSB transition to a spin-glass phase at a temperature $`T_s`$ that is accompanied by the vanishing of the configurational entropy similar to the entropy crisis at $`T_K`$ hypothesized by Kauzmann for glassforming liquids . Secondly, in the same mean-field limit the study of the Langevin dynamics of such models shows that a dynamical transition takes place at a temperature $`T_d`$ greater than $`T_s`$. Above $`T_d`$, the time evolution of the spin correlation function is given by a non-linear equation of the same type as those occurring in the ideal mode-coupling theory (MCT) for the glass transition of simple liquids, where they describe the time evolution of density fluctuations , and $`T_d`$ coincides with what is called the critical temperature $`T_c`$ in the context of MCT. Below $`T_d`$, the systems display a nontrivial free-energy landscape that in finite-range models could lead to slow activated dynamics, as described for instance by the Adam-Gibbs theory . Mean-field models and their extensions to finite-range systems could thus provide a consistent framework for the study of the liquid-glass transition, a framework in which the mode-coupling approach finds a natural place and that catches important aspects of the glass phenomenology. This approach has recently been very fruitful: first-principle studies based on the replica method have been proposed for simple liquid models , and computer simulation studies of the out-of-equilibrium dynamics of simple liquids show aging behaviors qualitatively similar to the one displayed by generalized spin glasses . A question remains, however, elusive. 
Indeed, it is known that the dynamical transition predicted by the ideal MCT is not experimentally observed: it is ’avoided’ because of ergodicity restoring processes not accounted by the theory. In general, it is thought to be replaced by a smooth crossover regime where the dynamics changes qualitatively . Because of the similarities between the ideal MCT and the dynamical aspects of the mean-field theory, the same breakdown is expected in the latter when more realistic systems with finite-range interactions are considered. Its influence on the other aspects of the mean-field picture described above is unknown, but the static transition is usually assumed to survive and the dynamical freezing is expected to occur only at the static transition . This issue clearly deserves further investigation. In this Letter, as a step in this direction, we propose a quantitative study of the above mean-field scenario for *real* supercooled liquids. By quantitative, we mean that we will extract from experimental data values for the two characteristic temperatures introduced within the mean-field approach and discuss these values in relation with known properties of the investigated systems. At present, no definite method exists for such a work. We propose here to start from a phenomenological mode-coupling approach, the so-called schematic approach to experimental data , which is found to be particularly well suited for investigating the avoidance of the dynamical transition. Then, taking advantage of the coincidence between the ideal mode-coupling equations and the dynamics of some mean-field generalized spin-glass models, the schematic calculation is rephrased in terms of an effective spin-glass hamiltonian, whose study allows us to determine, for the glassforming liquids under investigation, the location of the two transitions predicted by the mean-field theory. Schematic models are simple sets of mode-coupling equations , which have proven to be useful in testing the MCT on realistic systems. Indeed, these models can be included within a fitting procedure of experimental data *over a wide time or frequency range*, and they allow the calculation of effective mode-coupling parameters (’vertices’) describing the dynamical evolution of a liquid with varying external conditions . A major interest of these models is that they catch the universal features of the mode-coupling equations, in particular the asymptotic scaling results valid near the dynamical transition, but are not reduced to them. They make possible to overcome the difficulties and uncertainties arising from the need for corrections to the asymptotic critical predictions of the theory and to take into account the $`\alpha `$ relaxation as well as the regime of the high-frequency microscopic excitations. The price to pay is that these models are somewhat *ad hoc* and might possibly display non-generic features. We make use in this Letter of the results obtained from a previous study of the depolarized light scattering spectra of two so-called ’fragile’ glassformers, CKN and salol . Only the facts relevant to our present calculation are reviewed here and the reader is referred to the corresponding paper for details. 
The basis of our study is the well-studied $`\mathrm{F}_{12}`$ model, defined by the following mode-coupling equation for a correlator $`\varphi _0(t)`$: $$\ddot{\varphi }_0(t)+\nu _0\dot{\varphi }_0(t)+\mathrm{\Omega }_0^2\varphi _0(t)+\mathrm{\Omega }_0^2\int _0^tm_0(t-\tau )\dot{\varphi }_0(\tau )d\tau =0,$$ with $`m_0(t)=v_1\varphi _0(t)+v_2\varphi _0^2(t)`$, $`v_1,v_2\mathrm{}0`$. To fit the experimental light-scattering spectra over the full available frequency range, in particular to reproduce the shape of the peak in the THz domain and the extra intensity of the $`\alpha `$-peak, we must add a second correlator $`\varphi _1`$, whose time evolution is given by a similar equation with $`m_1(t)`$ simply chosen proportional to $`m_0(t)`$. The first correlator $`\varphi _0(t)`$ accounts in an effective way for the slow modes responsible for the slowing down of the relaxation, possibly leading to a dynamical transition. The second correlator $`\varphi _1(t)`$ describes the contribution of additional degrees of freedom that are important in the light-scattering processes, but whose slow dynamics is dominated by that of $`\varphi _0(t)`$. The critical slowing down is thus totally driven by the time evolution of $`\varphi _0`$, whose variation with temperature is entirely encoded in the $`T`$-dependence of the ’vertices’ $`v_1`$ and $`v_2`$ (the other parameters are taken $`T`$-independent). Because $`\varphi _1`$ plays no role in the parametrization of the dynamical evolution, it can be discarded from the building of the effective generalized spin-glass model. The latter is obtained by remarking that the long-time dynamics described by the $`\mathrm{F}_{12}`$ equation is *identical* to that of a mean-field spherical spin-glass with spins interacting via both 2-body and 3-body random terms, namely $$\beta H[𝐬]=-\underset{1i_1<i_2N}{\sum }J_{i_1i_2}s_{i_1}s_{i_2}-\underset{1i_1<i_2<i_3N}{\sum }J_{i_1i_2i_3}s_{i_1}s_{i_2}s_{i_3},$$ where the $`s_i`$’s are spherical spin variables and the couplings $`J_{i_1i_2}`$ and $`J_{i_1i_2i_3}`$ are independent Gaussian variables with zero means and variances $`\overline{(J_{i_1i_2})^2}=v_1/N`$ and $`\overline{(J_{i_1i_2i_3})^2}=2v_2/N^2`$. A remarkable property of these systems is that the correlation function of the Hamiltonian, $$\beta ^2\overline{H[𝐬]H[𝐬^{}]}=N\left(\frac{v_1}{2}q_{𝐬,𝐬^{}}^2+\frac{v_2}{3}q_{𝐬,𝐬^{}}^3\right),$$ where $`q_{𝐬,𝐬^{}}=𝐬𝐬^{}/N=\left(\sum _is_is_i^{}\right)/N`$ is the overlap between the spin configurations $`𝐬`$ and $`𝐬^{}`$, determines completely both their statics and dynamics. With parameters $`v_1`$ and $`v_2`$ extracted from a fit of the dynamics, we are thus able to construct an effective generalized spin-glass model allowing us to determine, for the glassforming salol and CKN, the two transitions of the mean-field scenario. The phase diagram of the generalized spin-glass model described above is easily computed and is plotted in figure 1. The dynamical-transition line separating the ergodic domain in the vicinity of the origin from the non-ergodic one in the vicinity of infinity has two branches: $`\{v_1=1,v_2\mathrm{}1\}`$ corresponding to a continuous or type A transition and $`\{v_1=2(v_2)^{1/2}-v_2,1<v_2\mathrm{}4\}`$ corresponding to a discontinuous or type B one, respectively.
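To make the role of the vertices concrete, the sketch below integrates the $`\mathrm{F}_{12}`$ equation with a naive fixed-step scheme. This is adequate only at short times; quantitative long-time work uses the standard time-decimation algorithm instead. The $`(v_1,v_2)`$ pairs are illustrative, not fitted values; one lies in the ergodic region and one beyond the type B line, where $`\varphi _0`$ develops a non-zero plateau.

```python
import numpy as np

# Naive short-time integration of the F12 equation
#   phi'' + nu*phi' + W^2*phi + W^2 * int_0^t m(t-tau) phi'(tau) dtau = 0,
# with m(phi) = v1*phi + v2*phi^2, phi(0)=1, phi'(0)=0.
# Illustrative (v1, v2) values; long-time studies need the decimation
# algorithm rather than this fixed-step scheme.

def f12(v1, v2, W=1.0, nu=1.0, h=5e-3, n=8000):
    m = lambda p: v1 * p + v2 * p * p
    phi, dphi = np.empty(n), np.empty(n)   # dphi: backward-difference velocity
    phi[0], dphi[0] = 1.0, 0.0
    phi[1] = 1.0 - 0.5 * (W * h)**2        # short-time expansion
    dphi[1] = (phi[1] - phi[0]) / h
    for k in range(1, n - 1):
        mem = h * np.dot(m(phi[k::-1]), dphi[:k + 1])   # memory integral
        acc = -nu * dphi[k] - W**2 * (phi[k] + mem)
        phi[k + 1] = phi[k] + h * dphi[k] + 0.5 * h * h * acc
        dphi[k + 1] = (phi[k + 1] - phi[k]) / h
    return phi

for v1, v2 in [(0.5, 1.5), (1.2, 2.5)]:
    beyond = v1 > 2.0 * np.sqrt(v2) - v2   # type B dynamical line
    print(f"v1={v1}, v2={v2}: beyond type B line: {beyond}, "
          f"phi(t_max) = {f12(v1, v2)[-1]:.3f}")
```

Beyond the line the correlator relaxes onto a finite plateau instead of decaying to zero, which is the non-ergodicity signalling the dynamical transition.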
At the static level, by using the replica trick to average over the quenched disorder, the model is exactly solved with a 1RSB ansatz, which leads to the following free energy density: $$\beta f(q,x)=-\frac{v_1}{4}(1-(1-x)q^2)-\frac{v_2}{6}(1-(1-x)q^3)-\frac{1}{2x}\mathrm{ln}(1-(1-x)q)-\frac{x-1}{2x}\mathrm{ln}(1-q),$$ where $`q`$, the mutual overlap between replicas lying in the same cluster, and $`x`$, the cluster size, are variational parameters ($`0\mathrm{}q,x\mathrm{}1`$), with respect to which the free energy has to be maximized. Here again, the transition line between the replica-symmetric phase at small couplings and the 1RSB phase at large couplings has two branches: a continuous 1RSB transition line coincides with the continuous dynamical transition line, whereas a discontinuous 1RSB transition line is found beyond the dynamical discontinuous transition line. It corresponds to the appearance of a non-zero solution for $`q`$ with $`x=1`$, and its equation as a parametric function of $`q`$ is given by $$v_1=2\frac{2q^2-3q-3(1-q)\mathrm{ln}(1-q)}{q^2(1-q)},v_2=3\frac{2q-q^2+2(1-q)\mathrm{ln}(1-q)}{q^3(1-q)}.$$ On figure 1 are also reported the effective vertices obtained from the experimental data for the two supercooled liquids. They display two different regimes with varying temperature. At higher temperatures, they vary linearly and show the apparent evolution of the liquids toward the dynamical transition expected from MCT. But, above the corresponding transition temperature, the behavior of the vertices changes: the transition is not reached, and the vertices follow the dynamical-transition line without crossing it. The first regime can be associated with the domain of validity of the ideal MCT, whereas the second one indicates the failure of the theory because of the putative onset of activated processes. Indeed, the MCT states that the dynamics is governed by vertices which are purely static quantities and thus change smoothly with external parameters. Even if in the case of schematic models the connection between the effective vertices and static quantities is somewhat obscured, it is reasonable to expect smooth variations of the fitted parameters with temperature. Accordingly, we concentrate in the following on the high-temperature regime and its extrapolation to lower temperatures. Note that one can avoid the need for extrapolation by using a low-frequency cut-off for the lowest temperatures. With this method, we find the dynamical temperatures $`T_d=T_c=257\pm 5`$ K for salol and $`T_d=T_c=388\pm 5`$ K for CKN, in good agreement with previously determined values of the mode-coupling transition temperature. By construction, the mode-coupling transitions of the liquids and the dynamical transitions of the effective disordered systems are identical. We now turn to the static calculation. We are interested in the discontinuous 1RSB transition temperature, since it is thought to describe the entropy crisis associated with the resolution of the Kauzmann paradox. Indeed, as first pointed out by Kauzmann, because the heat capacity of a supercooled liquid is substantially greater than that of the underlying crystalline solid, reasonable extrapolations of the liquid entropy below the glass transition seem to cross the entropy of the crystal at a non-zero temperature $`T_K`$. To resolve this paradox, following Gibbs and Di Marzio, it is sometimes postulated that, below $`T_g`$, a second-order transition to an ideal glassy state should exist, at which the configurational contribution to the entropy of the liquid vanishes.
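For reference, both discontinuous transition lines quoted above are easy to tabulate; the following is a direct transcription of these formulas.

```python
import numpy as np

# Direct transcription of the discontinuous transition lines of the
# effective model: the dynamical (type B) branch and the static 1RSB
# line parametrized by the overlap q.

v2_dyn = np.linspace(1.0, 4.0, 7)
v1_dyn = 2.0 * np.sqrt(v2_dyn) - v2_dyn                     # dynamical line

q = np.linspace(0.05, 0.65, 7)
lq = np.log(1.0 - q)
v1_st = 2.0 * (2.0*q**2 - 3.0*q - 3.0*(1.0 - q)*lq) / (q**2 * (1.0 - q))
v2_st = 3.0 * (2.0*q - q**2 + 2.0*(1.0 - q)*lq) / (q**3 * (1.0 - q))

print("dynamical line (v2, v1):", list(zip(v2_dyn.round(2), v1_dyn.round(3))))
print("static line    (v2, v1):", list(zip(v2_st.round(2), v1_st.round(3))))
# Both lines emerge from (v1, v2) = (1, 1) as q -> 0; for a given v2 the
# static v1 lies above the dynamical one, i.e. the static transition sits
# beyond the dynamical line, consistent with T_d > T_s.
```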
This entropy-vanishing mechanism is precisely at work in the mean-field models at a discontinuous 1RSB transition. Between the dynamical and static transitions, one finds that the Gibbs measure is dominated by a number of states exponentially large in $`N`$, leading to a finite configurational entropy density (defined as the logarithm of that number of states divided by $`N`$). At the static transition, this configurational entropy density vanishes, and it stays zero in the low-temperature phase. From the above analysis, we find $`T_s=242\pm 5`$ K for salol and $`T_s=376\pm 5`$ K for CKN. These values, which are very close to $`T_d`$, have to be compared with the experimental calorimetric glass transition temperatures ($`T_g=220`$ K for salol, $`T_g=333`$ K for CKN) and the empirically determined Kauzmann temperatures ($`T_K=175`$ K for salol; no value for CKN because the crystalline phase is unstable): $`T_s`$ is found in both cases above $`T_g`$, *i.e.* still in the liquid phase! All these characteristic temperatures are plotted in figure 2 in the case of salol: the total configurational entropy decreases by only $`10\%`$ between $`T_d`$ and $`T_s`$. We have investigated the effect of minor modifications of the schematic calculation on our results, namely changing the expression of the calculated susceptibility as a functional of $`\varphi _0`$ and $`\varphi _1`$ (each can enter linearly or quadratically in the susceptibility expression) and/or changing the second memory function to $`m_1(t)=r\varphi _0(t)\varphi _1(t)`$. The resulting fits are all of equally good quality, and the corresponding vertex trajectories, although slightly different, agree with the previous values for both the two transition temperatures and the vertices at the dynamical transition. As these changes merely affect $`\varphi _1`$, this consistency validates our assumption of neglecting it when building the effective model. For more drastic changes to the model, in which the $`\mathrm{F}_{12}`$ equation is replaced by another one (for instance, we tried the $`\mathrm{F}_{13}`$ and $`\mathrm{F}_{29}`$ models, where $`m_0(t)=v_1\varphi _0(t)+v_3\varphi _0^3(t)`$ and $`m_0(t)=v_2\varphi _0^2(t)+v_9\varphi _0^9(t)`$, respectively), we have not been able to fit the experimental susceptibilities satisfactorily. Indeed, the fitted curves failed to reproduce the location of the susceptibility minimum, its shape, or else the position of the $`\alpha `$-peak. As these features are crucial for the characterization of the dynamics within the MCT framework, the results, although in qualitative agreement with those obtained with the $`\mathrm{F}_{12}`$ model, do not appear reliable enough for quantitative purposes. The origin of this failure is in general unclear, but in some cases it can be related to non-generic features of a given schematic model (for instance, the $`\mathrm{F}_{13}`$ model displays an $`\mathrm{A}_3`$ singularity very close to the calculated vertex trajectories). This limitation seems to be severe for our static calculation, as it is known from the study of mean-field models like spin-glasses with $`p`$-spin ($`p>2`$) interactions or the Potts glass that the larger the asymptotic value of the correlation function at the dynamical transition, the larger the ratio $`T_d/T_s`$, and the $`\mathrm{F}_{12}`$ model only allows for small values of the former.
But we stress that, in our model, the temperature enters in a different way, only through the effective vertices whose dependence comes out directly from the fitting procedure to dynamic light-scattering susceptibilities. There is, thus, no built-in closeness of $`T_d`$ and $`T_s`$ in our work. A related remark is the independence of our results with respect to changes in the way $`\varphi _0(t)`$ enters the calculated susceptibility, thereby showing that its infinite time limit is not a sensitive parameter in the study. Within our phenomenological implementation of the mean-field approach, we find thus that this theory seems to overestimate notably the tendency of a supercooled liquid to form a glass. This overestimation shows up at two levels ($`T_d>T_g`$ on one hand, $`T_s>T_g`$ on the other) with clearly different implications. As stated in introduction, this result is expected from the dynamical side of the theory, because of its closeness with the ideal mode-coupling theory, whose inadequacy at low temperature is well known, and of the need to take into account corrections to mean-field in finite dimension . From this point of view, this conclusion is not new. What is more unexpected is that the obtained static temperature values are clearly located in the fluid domain and do not seem to be associated to any change of behavior of the studied systems. These results are thus inconsistent with the arguments prescribing that, going beyond mean-field, the real dynamical transition should occur at $`T_s`$ for finite dimensional systems . Whether the found overestimation has to be ascribed to an inadequacy of our simple phenomenological approach or more fundamentally to the theory itself can not be answered here *a priori*. We can nevertheless remark that theoretical studies of simple liquid models based on the replica method tend to support our observations and conclusions. Indeed, one finds in general that the location of the static transition agrees well with the glass transition found by computer simulation, *i.e.* obtained with large quenching rates and on short observation times. This agreement implies thus an overestimation of the ability of the liquids to freeze into a glass on a macroscopic time scale, as found in our calculation. Concerning the closeness of the dynamical and static transitions, our results can only be compared consistently with theoretical approaches allowing one to compute the locations of both transitions within the same framework. This is the case in the papers of Ref. only, in which the transitions are found very close to each other, just as we find here. To summarize, we have investigated a potential application of the mean-field scenario for the liquid-glass transition to *real* supercooled liquids by constructing an effective generalized spin-glass model whose dynamics reproduces the standard mode-coupling equations used in the ideal MCT of glassforming liquids. We have been able to determine from experimental data for two fragile glass-formers both the dynamical and static transitions, which are found to be rather close and both located in the liquid phase. This result, in qualitative agreement with recent theoretical studies, shows that the theory apparently overestimates the ability of a supercooled liquid to freeze into a glass, at least in its simple implementation considered here. 
Whether this deficiency could be cured by employing a more sophisticated (but yet unknown) version of the mean-field approach to real supercooled liquids or would require a non mean-field treatment accounting explicitly for activated dynamics remains an open question. We are grateful to Prof. W. Götze and Dr. M. Fuchs for providing us the codes for some parts of the calculations. We are also indebted to Prof. H.Z. Cummins and Dr. G. Li for the use of their experimental data. Dr. G. Tarjus and Dr. J. Kurchan are particularly acknowledged for fruitful discussions.
# The formation of asymmetries in Multiple Shell Planetary Nebulae due to interaction with the ISM ## 1. Introduction The existence of faint shells surrounding PNe was pointed out by Duncan (1937), and Chu, Jacoby & Arendt (1987) studied and classified a large sample of MSPNe. Since then, the use of more sensitive CCD detectors and the HST telescope have revealed that these structures are more common than was originally believed. MSPNe appeared in 24% of a complete sample of spherical and elliptical PNe in the northern hemisphere (Manchado 1996); 40% of these show asymmetries in the halo that could be related to the interaction with the ISM. Many studies have been carried out to try to establish a connection between the central star evolution and the observed shells (Stanghellini & Pascuali 1995, Trimble & Sackmann 1978, Frank, van der Venn & Balick 1994). We have approached the problem from a different angle: to find out whether the predictions of discrete enhanced mass-loss rates during the AGB for low mass stars given by Vassiliadis & Wood (1993) are able to reproduce the observed MSPNe. ### 1.1. The MSPNe We worked out numerical simulations using ZEUS-3D as a hydro-code and the stellar evolutionary models as the inner boundary conditions. We set up the time-dependent wind parameters using the models of Vassiliadis & Wood (1993) during the AGB and Vassiliadis & Wood (1994) for the post-AGB stages. During the transition time we adopted a linear interpolation between the wind values at the end of the AGB and wind values at the beginning of the post-AGB. In fig.1. we show the observed H$`\alpha `$ emission brightness profile of the PN NGC 6826 and the computed ones at different evolutionary times. The halo brightness profiles are characterized by a continuous decline in the emission and a relative maximum at the edge caused by an abrupt enhancement of the density on the leading surface of the shell. The linear size of NGC 6826 has been computed by adopting the spectroscopic distance of 2.2 Kpc given by Mendez, Herrero & Manchado (1990). A direct comparison of the observed and computed profiles shows that our simulations are able to reproduce the overall shape and size of the nebula. ### 1.2. The interaction process The time spent by the star as it evolves during the AGB has been neglected in previous works related to the study of the interaction process (Borkowski, Sarazin & Soker 1990, Soker, Borkowski & Sarazin 1991). To address this question we set up the time dependent wind parameters within a small spherical input region centered on the symmetry axis, where reflecting boundary conditions have been used. An outflow boundary condition has been set at the outer radial direction. Interaction with the ISM (Figure 2) has been studied assuming that the star is moving supersonically with 20 $`kms^1`$ across an homogeneous external medium with density 0.1 $`cm^3`$. Isothermal sound speed of the unperturbed ISM is $`c=3kms^1`$, which give us a Mach number of 7. The movement takes place perpendicular to the line of sight. The computation was performed in a 2D spherical grid with the angular coordinate ranging from 0 to 180 degrees. The interaction between the wind expelled by the star and the ISM takes place from the beginning of the evolution. A bow-shock configuration is quickly established (our Mach number, 7, is enough to form a strong shock). The compression across a strong isothermal shock depends on the upstream Mach number. 
To keep the gas isothermal, the internal energy has to be radiated away. This internal energy would otherwise have limited the compression. The mass loss associated with the last thermal pulse interacts directly with the local unperturbed ISM, giving rise to a less dense shell than the one formed by a steady star. ## 2. Conclusions As a consequence of the evolution of a 1 $`M_{}`$ star following the models of Vassiliadis & Wood (1993, 1994), a Multiple Shell Planetary Nebula is formed. Since our assumptions about the velocities and the ISM conditions are very conservative, we can conclude that we will see a spherical halo only if the star is at rest relative to the ISM or if it is moving at low angles relative to the line of sight. ## Acknowledgment We thank M. L. Norman and the Laboratory for Computational Astrophysics for the use of ZEUS-3D. The work of EV and AM is supported by a grant from the Spanish DGES, PB97-1435-C02-01. ## References Duncan, J. C. 1937, ApJ, 86, 496 Borkowski, K. J., Sarazin, C. L., & Soker, N. 1990, ApJ, 360, 173 Chu, Y.-H., Jacoby, G. H., & Arendt 1987, ApJS, 64, 529 Frank, A., van der Veen, W.E.C.J., & Balick, B. 1994, A&A, 282, 554 Manchado, A. 1996, in IAU Symp. 180, Planetary Nebulae, ed. H.J. Habing & H.J.L.M. Lamers, 184 Méndez, R. H., Herrero, A., & Manchado, A. 1990, A&A, 229, 152 Stanghellini, L., & Pasquali, A. 1995, ApJ, 452, 286 Soker, N., Borkowski, K.J., & Sarazin, C.L. 1991, AJ, 102, 1381 Trimble, V., & Sackmann, I.-J. 1978, MNRAS, 182, 97 Vassiliadis, E., & Wood, P. 1993, ApJ, 413, 641 Vassiliadis, E., & Wood, P. 1994, ApJS, 92, 125
# Angle-dependent magnetothermal conductivity in 𝑑-wave superconductors ## Abstract We analyse the behavior of the thermal conductivity, $`\kappa (H)`$, in the vortex state of a quasi-two-dimensional d-wave superconductor when both the heat current and the applied magnetic field are in the basal plane. At low temperature the effect of the field is accounted for in a semiclassical approximation, via a Doppler shift in the spectrum of the nodal quasiparticles. In that regime $`\kappa (H)`$ exhibits twofold oscillations as a function of the angle between the direction of the field in the plane and the direction of the heat current, in agreement with experiment. Experiments which show that the superconducting order parameter in the high-T<sub>c</sub> cuprates is $`d`$-wave are often sensitive to the surface effects, and there is still interest in the bulk probes of the symmetry of the gap. One piece of evidence for the linear nodes in the bulk comes from the verification of the universal low temperature limit of the in-plane thermal conductivity; another is based on the observed non-linear dependence of the electronic specific heat on the applied magnetic field, $`H`$, in the mixed state. The latter result is based on the observation that the properties of a $`d`$-wave superconductor in the dilute vortex regime ($`H\mathrm{}H_{c2}`$) are determined by the near-nodal quasiparticles in the bulk. In a semiclassical treatment, the energy of a quasiparticle with the momentum $`𝐤_n`$, where $`n`$ labels a node, at point r is shifted by $`\delta \omega =𝐯_s(𝐫)𝐤_n`$, where $`𝐯_s`$ is the velocity field associated with the supercurrents. In the regions of the Fermi surface near the nodes where this shift exceeds the local gap, there exist unpaired quasiparticles. Since the typical supermomentum is $`\mathrm{}/R`$, where $`R^2\mathrm{\Phi }_0/\pi H`$, and since the number of vortices $`n_vH`$, the spatially averaged density of states in a pure $`d`$-wave material varies as $`\sqrt{H}`$. If the positions of the impurities and the vortices are uncorrelated, the argument can be generalized to include the impurity scattering, which depends on the local density of states. This simple approach has worked remarkably well in describing the low-temperature thermal and transport properties of the vortex state with the field perpendicular to the CuO<sub>2</sub> layers. In particular, the increase of the $`T=0`$ limit of the in-plane thermal conductivity with $`H`$ is well described by the semiclassical theory. In relatively three-dimensional cuprates, such as YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>, the same approach applies when the field is in the basal plane. In that case the magnitude of the Doppler shift depends on the relative orientation of the field with respect to the nodes, and the density of states in the pure limit exhibits fourfold oscillations as a function of the angle between the direction of the field and the crystalline axes. Experimental verification of this prediction has not been possible so far and is hindered by the extrinsic contributions to the specific heat and the orthorhombicity of the material. On the other hand, the angular dependence of the thermal conductivity in the vortex state has already been observed, and here we analyse it in the semiclassical framework. As in Refs. we assume a cylindrical Fermi surface and a $`d`$-wave gap, and approximate the superflow by the velocity field around a single vortex.
The field and the thermal gradient are applied at angles $`\alpha `$ and $`\epsilon `$ to the $`\widehat{\mathrm{a}}`$ axis, respectively. We consider the regime $`T,\gamma \ll E_H\ll \mathrm{\Delta }_0`$, where $`E_H\simeq (v_f/2)(\pi H\lambda _{ab}/\mathrm{\Phi }_0\lambda _c)^{1/2}`$ is the average Doppler shift, and $`\gamma `$ is the low-energy scattering rate. Following Refs. we obtain that the local change in $`\kappa `$ is $`\delta \kappa (\rho )/\kappa _{00}=E(\alpha )\mathrm{sin}^2\beta /\rho ^2`$, where $`\kappa _{00}/T=\pi N_0v_f^2/6\mathrm{\Delta }_0`$ is the universal thermal conductivity, $`N_0`$ is the normal-state density of states, $`\mathrm{\Delta }_0`$ is the gap amplitude, $`v_f`$ is the Fermi velocity, $`\beta `$ is the winding angle of the vortex, $`\rho `$ is the distance from the center of the vortex normalized to $`R`$, $`E(\alpha )=(\pi E_H^2/8\mathrm{\Gamma }\mathrm{\Delta }_0)\mathrm{max}(\mathrm{sin}^2\alpha ,\mathrm{cos}^2\alpha )`$, and $`\mathrm{\Gamma }`$ is the bare scattering rate. The local $`\kappa (𝐫)`$ has to be spatially averaged to obtain the field dependence. When the heat gradient, $`\nabla T`$, is parallel to the field, $`\kappa _{\parallel }(H)=\langle \kappa (𝐫)\rangle `$, where the brackets denote the average over a unit cell of the vortex lattice. For other relative orientations of $`\nabla T`$ and 𝐇 the averaging procedure is not clear; it was argued that $`\kappa _{\perp }(H)=[\langle 1/\kappa (𝐫)\rangle ]^{-1}`$ is appropriate for $`\nabla T\perp 𝐇`$. We therefore take here the simple approach of averaging independently the components of the heat current along and normal to the vortex, and expect that this procedure gives at least qualitatively correct results. Then the longitudinal and the Hall thermal conductivities are given by $`\kappa _1=\kappa _{\parallel }\mathrm{cos}^2(\alpha -\epsilon )+\kappa _{\perp }\mathrm{sin}^2(\alpha -\epsilon )`$ and $`\kappa _2=(1/2)|(\kappa _{\parallel }-\kappa _{\perp })\mathrm{sin}2(\alpha -\epsilon )|`$, respectively, with $`\kappa _{\parallel }=\kappa _{00}[1+E(\alpha )\mathrm{ln}(\mathrm{\Delta }_0/E_H)]`$ and $$\frac{\kappa _{\perp }}{\kappa _{00}}=\left[\sqrt{1+E(\alpha )}-E(\alpha )\mathrm{sinh}^{-1}\frac{1}{\sqrt{E(\alpha )}}\right]^{-1}.$$ (1) The result for $`\kappa _1`$ is in agreement with the twofold pattern of Ref. at $`T=0.8`$ K with $`\epsilon =\pi /2`$; see Fig. 1. The minima of $`\kappa _1(\alpha )`$ correspond to increased scattering by the vortices when the heat current is normal to the field. In the regime where $`T\gg E_H`$, Yu et al. have measured $`\kappa _\pm =(\kappa _1\pm \kappa _2)/\sqrt{2}`$ with $`\epsilon =0`$, found a twofold pattern, and explained it as a consequence of Andreev reflection of quasiparticles in the presence of supercurrents; we note that the angular dependence of $`\kappa _\pm `$ is very similar to what we obtain at $`E_H\gg T`$. In the same regime Aubin et al. observed a fourfold symmetry of $`\kappa _1(\alpha )`$, consistent with the picture in which the scattering of quasiparticles by the vortices becomes more important at high $`T`$, and this scattering has the same symmetry as the gap. The work to explain the high-$`T`$ behavior in detail is in progress and will be reported elsewhere.
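To make the twofold pattern explicit, here is a minimal numerical sketch of $`\kappa _1(\alpha )`$ built from the expressions above; the overall scale of $`E(\alpha )`$ and the ratio $`\mathrm{\Delta }_0/E_H`$ are illustrative choices of mine, not values from the paper.

```python
import numpy as np

def kappa_components(E, ratio=20.0):
    """kappa_par and kappa_perp in units of kappa_00; ratio = Delta_0/E_H (assumed)."""
    k_par = 1.0 + E * np.log(ratio)
    k_perp = 1.0 / (np.sqrt(1.0 + E) - E * np.arcsinh(1.0 / np.sqrt(E)))
    return k_par, k_perp

def kappa_1(alpha, eps=np.pi / 2, E0=0.05):
    # E(alpha) = E0 * max(sin^2 a, cos^2 a); E0 stands for pi*E_H^2/(8*Gamma*Delta_0)
    E = E0 * np.maximum(np.sin(alpha)**2, np.cos(alpha)**2)
    k_par, k_perp = kappa_components(E)
    return k_par * np.cos(alpha - eps)**2 + k_perp * np.sin(alpha - eps)**2

alpha = np.linspace(0.0, np.pi, 7)   # pattern repeats with period pi
for a, k in zip(alpha, kappa_1(alpha)):
    print(f"alpha = {np.degrees(a):5.1f} deg   kappa_1/kappa_00 = {k:.4f}")
```

With these inputs $`\kappa _1`$ oscillates with period $`\pi `$, and its minima fall where the heat current is normal to the field, as described above.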
# Pulsar Electrodynamics ## 1. Magnetospheric Current System ### 1.1. Pulsar Wind The rotational energy of pulsars is carried off mostly by the pulsar wind. Pulsed radiation only accounts for a small fraction of the rotation power. At least for young pulsars, this idea is supported by observations of pulsar-powered nebulae. The pulsar wind is predominant in pulsar electrodynamics. However, the nature of the pulsar wind is not clear. The wind is conventionally thought to be an outflow of magnetized plasmas, which transports energy in the forms of a Poynting flux and a kinetic energy flux. Dominance of the wind may not be the case if the electromotive force of the star is only marginally above the voltage required for pair creation. Nevertheless, we assume dominance of the wind in the following, and this is hopefully justified for most pulsars. ### 1.2. Electric current in the magnetosphere For the axisymmetric case, if there is an outflow of energy, then a simple result is a loop current system. As shown in Figure 1 (left), the current starts from the star, goes out, and returns back to the star on different field lines. A DC circuit is formed, connecting the central dynamo and the load (the wind region beyond the light cylinder). The poloidal electric field produced by the dynamo and the toroidal magnetic field produced by the loop current make the outward Poynting flux. The current can go away to infinity and return back from infinity. However, if some part of the current closes somewhere in the outer magnetosphere, then Poynting flux is converted into the kinetic energy of the plasma to accelerate the wind ($`𝑬\cdot 𝒋>0`$). How much of the current closes in the wind determines whether or not a kinetically dominated wind is formed. To my knowledge, this is still controversial. For oblique cases, a non-zero displacement current complicates the interpretation. However, it can be shown that if there is no field-aligned electric field ($`E_{\parallel }\equiv 𝑬\cdot 𝑩/B=0`$), the outward energy flux does require a ‘real’ loop current even with displacement current. Only for a non-vanishing field-aligned electric field ($`E_{\parallel }\ne 0`$) does the displacement current contribute to the outflow of energy, as in the case of magnetic dipole radiation (cf. the Appendix of Shibata & Hirotani, 1999). An illustrative example is the oblique force-free model without field-aligned current. In this case, there is no Poynting flux across the light cylinder (Henriksen & Norton 1975; Beskin, Gurevich, & Istomin 1993; Mestel, Panagi, & Shibata 1999). If the potential drop associated with a probable $`E_{\parallel }\ne 0`$ is much smaller than the available voltage, then in general the ‘real current loop’ dominates in determining the rotational energy loss. As a result, the field-aligned current, with an intensity of the order of the Goldreich-Julian (GJ) value, is forced to run through the inner magnetosphere, connecting the generator (the neutron star) and the main load, which is the wind. This view seems to be applicable at least for young pulsars because their electromotive force is large enough to make electron-positron pairs. ## 2. Constraints for the inner magnetosphere The inner magnetosphere is inflexible: (1) the magnetic field is hardly changed by the GJ current; it is essentially current-free, say, dipolar.
(2) The order of the strengths of the various forces, i.e., pressure force $`\ll `$ gravitational force $`\ll `$ electromagnetic force, results in the following: (i) it is the electromagnetic force that controls the particle motion; (ii) due to strong gravity, a quasi-neutral plasma cannot be supported above scale heights (a few cm); quasi-static plasmas should be completely charge-separated, i.e., composed of either electrons only or ions/positrons only; (iii) pair plasma can stay quasi-neutral only if the field-aligned electric field in it is screened out. (3) Finally, the smallness of the particle inertia requires the ideal-MHD condition, $`𝑬+𝒗\times 𝑩/c=0`$. If this applies, the particles follow corotation plus field-aligned motion: $`𝒗=𝒖_\mathrm{c}+\kappa 𝑩`$, where $`𝒖_\mathrm{c}=𝛀\times 𝒓`$ is the corotation velocity, and the electric field becomes the corotational electric field, $`𝑬=-(1/c)𝒖_\mathrm{c}\times 𝑩\equiv 𝑬_\mathrm{c}`$, which is supported by the GJ charge density $`\rho _{\mathrm{GJ}}=\nabla \cdot 𝑬_\mathrm{c}/4\pi `$. Near the star, the general relativistic effect changes the form of $`\rho _{\mathrm{GJ}}`$, which in turn changes the acceleration field for a given current density. The corotational electric field has no component along the magnetic field, $`E_{\parallel }=0`$. Although the particles nearly follow the rigid field lines, ideal corotation is not realized everywhere in the inner magnetosphere. This is because the particle supply does not lead to the GJ density everywhere. Even a small deviation of the real charge density from the GJ charge density creates a field-aligned electric field. The unscreened electric force is balanced by an inertial force; oscillations (of Langmuir type) can develop and, in some cases, a huge potential drop appears, which can be large enough for renewed electron-positron pair creation, depending on the situation, e.g., the imposed current density, the field geometry, and emission from the neutron star. The nature of completely charge-separated flows depends on field line curvature. The charge density of the flow varies according to the equation of continuity, $`\rho _e=en\propto B/v`$, and thereby once the flow is relativistic, the charge density varies in proportion to the magnetic field strength: $`\rho _e\propto B`$. On the other hand, the GJ density changes as $`\rho _{\mathrm{GJ}}\propto B_\mathrm{z}`$. Hence, even if the GJ density is realized somewhere in the flow, the charge density deviates from the GJ value and $`E_{\parallel }`$ should appear. On field lines curving toward the rotation axis, the ratio $`\rho _e/\rho _{\mathrm{GJ}}=B/B_\mathrm{z}`$ decreases outwardly, so that the charge tends to be depleted on these field lines. On field lines curving away from the rotation axis, the charge tends to exceed the GJ value. In some regions, when the flow is non-relativistic, the GJ density appears over a long distance, the velocity adjusting so as to screen the field-aligned electric field. Such a flow in general has spatial oscillations of the Langmuir type. ## 3. Outer Gap ### 3.1. Quasi-Static Consideration The original idea of the outer gap is as follows (Holloway 1973). Suppose some particles are ripped off through the light cylinder due to the centrifugal force, wave pressure, or whatever. Particles in the inner magnetosphere become insufficient to provide the GJ charge density. The density perturbation grows, and the charge separation by the central dynamo is so strong that the region around the null surface, where $`\rho _{\mathrm{GJ}}=0`$, is left vacant with unscreened $`E_{\parallel }`$.
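As an aside, the absolute scale of the GJ density can be fixed numerically: for a uniform field aligned with the rotation axis, $`\rho _{\mathrm{GJ}}=\nabla \cdot 𝑬_\mathrm{c}/4\pi `$ reduces to $`|\rho _{\mathrm{GJ}}|=\mathrm{\Omega }B/2\pi c`$. A minimal sketch with assumed Crab-like numbers (the period and surface field below are my illustrative inputs, not values from the text):

```python
import numpy as np

P = 0.033        # assumed spin period [s] (Crab-like)
B = 3.8e12       # assumed surface magnetic field [G]
c = 2.998e10     # speed of light [cm/s]
e = 4.803e-10    # elementary charge [esu]

Omega = 2.0 * np.pi / P
rho_GJ = Omega * B / (2.0 * np.pi * c)   # |rho_GJ| [esu/cm^3], aligned rotator
n_GJ = rho_GJ / e                        # corresponding number density [cm^-3]
print(f"|rho_GJ| ~ {rho_GJ:.2e} esu/cm^3   n_GJ ~ {n_GJ:.2e} cm^-3")
```

This is the density scale against which the deviations discussed above are measured.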
Refilling particles back through the light cylinder is unlikely because it opposes the electromotive force. More likely, additional charges are provided by an electron-positron pair creation cascade in the gap, perhaps initiated by a cosmic ray entering the gap. The created pairs are immediately charge-separated so as to refill the charge-depleted regions. The gap will then be reduced. Furthermore, electrons going back to the star give negative charge to the star, and as a result the vacant space above the polar dome can even be refilled. In the end, the corotation region expands and occupies the space within the light cylinder as far as possible. The expansion of the corotation region may induce further loss of particles. More importantly, the refilling action produces an electric current system: it is outward across the outer gap and inward at higher latitudes; see Figure 1 (right). This current is completely consistent with that required by the wind. This indicates that the pair creation refilling the gap cooperates with the wind: the outer gap accelerator will operate steadily within the loop current. A bonus is that the wind obtains quasi-neutral plasma via the pair creation. ### 3.2. A Steady Outer Gap Model with Pair Creation Hirotani & Shibata (1999a, 1999b, 1999c; hereafter HS) calculated the electric field in a steady one-dimensional outer gap self-consistently with a gamma-ray distribution function and flows of electrons and positrons created by photon-photon collisions. The gap is partially refilled, but still has a field-aligned electric field. The gap model is somewhat similar to a semiconductor in the sense that the concept of holes and real charged particles is convenient. There are effectively positive holes to the left of the null surface, and negative holes to the right of the null surface in Fig. 1. The charge density of the holes is minus the GJ value, $`-\rho _{\mathrm{GJ}}`$. For example, on the left, if the positive holes are filled with electrons, then the real negative charge equals the GJ value and the field-aligned electric field can be screened out: $`\rho _e-\rho _{\mathrm{GJ}}=0`$. In the gap, two regions with oppositely charged holes face each other, and a field-aligned electric field appears. Particles are accelerated in the gap and emit gamma-rays, which collide with soft photons to make pairs. The pairs are immediately separated in opposite directions. These pairs produce space charge in the gap so as to refill the gap in part. The field-aligned electric field is weakened. These processes are described by the Poisson equation, equations of motion, equations for the gamma-ray distribution function with source and sink terms, and equations of continuity for electrons and positrons, again with a source term. HS solved these equations for a steady outer gap. ### 3.3. A Steady Outer Gap Model with External Current Sources Conventional outer gap models assume that all the current-carrying particles are created in the gap. However, external current-carrying particles may come into the gap: current-carrying electrons may come in from the wind, or positrons may be emitted from the stellar surface. This external current supplies an additional space charge, as if a background charge were added to the GJ charge density. As a result, the position of the outer gap moves inward or outward. In principle, the external-current-dominated outer gap can appear near the star. In this sense there is no obvious distinction between the outer gap and the polar gap.
The difference is just one parameter: the ratio of the externally supplied current-carrying particles to the current-carrying particles produced in the gap. In contrast to the outer gap, conventional polar cap models assume a completely external current source. ## 4. Models of the inner magnetosphere Let us summarize the various reactions of the inner magnetosphere to the imposed current. The inner magnetosphere through which electric current is forced to run can be classified according to the field line curvature and the direction of the current. The direction and intensity of the current on each field line are determined globally, and are more like quantities imposed by the wind, at least for young pulsars. The outer gap is located on field lines with away curvature and an outgoing current. As has been described in detail, a steady, pair-creating outer gap is possible. If the current running through the gap is all carried by electrons and positrons created in the gap, the gap is located at the null surface. If some part of the current is carried by externally supplied particles, then the gap shifts its position, but the electrodynamics itself is the same as that which applies around the null surface. As for the polar cap accelerators, on field lines curving ‘toward’ the rotation axis, the Arons-type model is known (Scharlemann, Arons, & Fawley 1978; Arons & Scharlemann 1979; Shibata 1997), where an external super-GJ current at a critical value is assumed. The region near the star is over-filled with negative charge, while the downstream region becomes charge-depleted due to field line curvature ($`B/B_\mathrm{z}`$ decreases). Therefore, of these two regions, one needs positive charge and the other needs negative charge in order for the real charge density to adjust to the respective GJ values. Thus the electrodynamics of this accelerator is the same as that of the outer gap. The difference is that the polar gap current is externally supplied. The gap can make pairs, and some of the pairs contribute to the current, so that the polar gap model can be modified to carry current densities larger than the critical value. In this case, one has a flux of backward positrons. However, if the current density required by the wind is smaller than a certain value, the flow is an oscillatory GJ flow without any acceleration (Shibata 1997). Acceleration also takes place on field lines curved away from the rotation axis. In this case, again the electrons are assumed to flow out. Even if the flow is initially an oscillatory GJ flow near the star, as the flow goes up, the space automatically becomes over-filled at some distance because of the field line curvature ($`B/B_\mathrm{z}`$ increases). This catastrophic break-up of the $`𝑬\cdot 𝑩=0`$ condition may be terminated by the formation of a positive charge region downstream. This is possible by polarization of a pair plasma: electrons are accelerated to emit gamma-rays and subsequently to make pairs; pair positrons are decelerated, and as a result positive space charge appears. However, the pair creation rate required for this screening to operate is much higher than the value expected from normal magnetic pair creation: the simple idea that once the pair density exceeds the GJ density the field-aligned electric field is immediately screened out is not correct (Shibata, Miyazaki, & Takahara 1998). The reaction of the away-curved field lines to the imposed current has yet to be clarified. ## 5. Future
As seen above, the inner magnetosphere reacts to the imposed current in various ways depending on the current direction, current density, field geometry, external particle flux, and so on. The accelerators can appear at various altitudes, so there is no clear discrimination between the outer gap and the polar gap. This might be confirmed observationally by the identification of various new pulse components. We need more accurate local models of the field-aligned accelerators, which should predict the properties of the high-energy emission. The local models should have adjustable free parameters, such as the current density and the external particle flux, and should be self-consistent in the sense that the electric field, the pair production process, the photon distribution, and the motion of the particles are all solved together. Only such sophisticated models can interpret detailed observations giving phase- (component-) resolved energy spectra. ## References
Arons, J., & Scharlemann, E.T. 1979, ApJ, 231, 854
Beskin, V. S., Gurevich, A. V., & Istomin, Ya. N. 1993, “Physics of the Pulsar Magnetosphere”, Cambridge University Press, p. 163
Henriksen, R. N., & Norton, J. A. 1975, ApJ, 201, 719
Hirotani, K., & Shibata, S. 1999a, MNRAS, 308, 54
Hirotani, K., & Shibata, S. 1999b, MNRAS, 308, 67
Hirotani, K., & Shibata, S. 1999c, PASJ, 51, 683
Holloway, N.J. 1973, Nature Phys. Sci., 246, 6
Mestel, L., Panagi, P., & Shibata, S. 1999, MNRAS, 309, 388
Scharlemann, E.T., Arons, J., & Fawley, W.M. 1978, ApJ, 222, 297
Shibata, S. 1997, MNRAS, 287, 262
Shibata, S., & Hirotani, K. 1999, in preparation
Shibata, S., Miyazaki, J., & Takahara, F. 1998, MNRAS, 295, L53
# Comment on ‘Ising Pyrochlore Magnets: Low Temperature Properties, “Ice Rules,” and Beyond’ by R. Siddharthan et al. Siddharthan et al. discuss the competition between dipolar coupling and superexchange in pyrochlore magnets with $`\langle 111\rangle `$ Ising-like anisotropy, such as Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> and Dy<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>. In the simplest approximation for Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>, with Ising spins and nearest-neighbor ferromagnetic exchange, one obtains the “spin ice” model that predicts a macroscopically degenerate ground state. Consistent with this, $`\mu `$SR and neutron scattering data find no phase transition in Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> in zero field down to at least 50 mK, and at that temperature the neutron scattering pattern is consistent with the spin ice ground state. On the other hand, the dipolar coupling, $`J_D`$, in these materials is of the same order of magnitude as the superexchange, $`J_S`$. Siddharthan et al. derive an estimate of $`J_S=-1.92`$ K for the near-neighbour exchange and $`J_D=2.35`$ K. These values are consistent with our own estimates, $`J_D=2.35`$ K and $`J_S=-1.4\pm 0.5`$ K, from single crystal measurements where demagnetizing effects were explicitly accounted for, and which correspond to the Curie-Weiss temperature of +1.9 K quoted in Ref. . Using their estimated value of $`J_S`$, Siddharthan et al. simulated a model in which the dipolar summation is truncated at five nearest neighbours, and find a transition to a partially ordered phase marked by a sharp peak in the heat capacity, $`C(T)`$, at $`T\approx 0.8`$ K. They further argue that such a picture applies to Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> by commenting that the experimental heat capacity shows, as their simulation does, a steep increase at about 1 K, and that the experimental data for Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> suggest a vanishing entropy for the ground state (on integrating $`C(T)/T`$). Their conclusion of a phase transition in Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> is inconsistent with the experimental $`\mu `$SR and neutron data of Refs. . In view of the very large hyperfine coupling in Ho, as observed in many other Ho-containing compounds such as HoF<sub>3</sub>, it is necessary to include a component from the nuclear spin, $`I`$. Blöte et al. find a heat capacity in the pyrochlore Ho<sub>2</sub>GaSbO<sub>7</sub> very similar to the one of Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>, which they model below 2 K as a simple Schottky anomaly with the theoretical maximum for Ho ($`I=7/2`$) of $`0.9`$ R at a temperature of about 0.3 K. In this Comment we argue that Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> does in fact exhibit spin ice behavior, and that the experimental specific heat data can be accounted for by a dipolar spin ice model plus an expected contribution from the nuclear spins, with the appropriate long-range treatment of the dipole-dipole interactions. Dipole-dipole interactions are conditionally convergent due to their $`1/r^3`$ nature, and their lattice summation must be considered with care. In order to include the long-range nature of the dipole-dipole interaction, we have used the standard Ewald method in our Monte Carlo simulations with either of the above sets of $`\{J_S,J_D\}`$ values. We found it sufficient to simulate $`4\times 4\times 4`$ cubic cells (1024 spins) with $`10^6`$ Monte Carlo steps per spin.
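For orientation, the dipolar scale quoted above follows from textbook constants; in the sketch below the Ho<sup>3+</sup> moment (≈10 μ<sub>B</sub>), the nearest-neighbour distance (≈3.54 Å), and the 5/3 projection factor appropriate for local ⟨111⟩ Ising axes are assumed inputs of mine, not numbers taken from this Comment.

```python
mu_B = 9.274e-24          # J/T
k_B  = 1.381e-23          # J/K
mu0_over_4pi = 1.0e-7     # T^2 m^3 / J

mu   = 10.0 * mu_B        # assumed Ho3+ moment
r_nn = 3.54e-10           # assumed nearest-neighbour distance [m]

D    = mu0_over_4pi * mu**2 / r_nn**3 / k_B   # bare nearest-neighbour dipolar scale [K]
D_nn = (5.0 / 3.0) * D                        # projected onto local <111> Ising axes
print(f"D = {D:.2f} K,  D_nn = 5D/3 = {D_nn:.2f} K")   # ~2.35 K, cf. J_D above
```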
Figure 1 shows the electronic (spin) part of the heat capacity for $`J_S=-1`$ K and $`J_D=2.35`$ K (open squares). The total entropy is within 2% of $`R\{\mathrm{ln}(2)-(1/2)\mathrm{ln}(3/2)\}`$ and therefore agrees with that of spin ice. We have found in our Ewald simulations that dipolar spin ice occurs for all values $`J_S/J_D>-0.91`$, which includes the value of Ref. , and that fully developed $`Q=0`$ long-range antiferromagnetic order occurs for $`J_S/J_D<-0.91`$, with no partially frozen or ordered state as found in Ref. . The open circles show the sum of this magnetic heat capacity with a nuclear Schottky anomaly as found in Ho<sub>2</sub>GaSbO<sub>7</sub>. We find that these results reproduce reasonably well the experimental data presented in Ref. (filled circles), and are definitely more accurate than the model proposed by Siddharthan et al. Equivalently, subtracting the nuclear contribution from the magnetic heat capacity measured in Ref. leads to a heat capacity similar to that found in the spin ice material Dy<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub>. In conclusion, we argue that the experimental specific heat of Ho<sub>2</sub>Ti<sub>2</sub>O<sub>7</sub> can be accounted for by the sum of a dipolar spin ice magnetic contribution and a nuclear hyperfine Schottky anomaly, with no need to invoke a transition to a partially ordered state.
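The nuclear piece invoked above is straightforward to reproduce. A minimal sketch of the hyperfine Schottky anomaly for the $`2I+1=8`$ equally spaced levels of Ho (the uniform splitting $`\delta `$ is a free parameter of mine, set here so the peak sits near 0.3 K):

```python
import numpy as np

I = 7.0 / 2.0                       # Ho nuclear spin -> 8 hyperfine levels
delta = 0.30                        # assumed uniform level splitting [K]
levels = delta * np.arange(int(2 * I + 1))

def schottky_C(T):
    """Heat capacity of the level ladder, in units of R: C/R = var(E)/(k_B T)^2."""
    w = np.exp(-np.outer(1.0 / T, levels))       # Boltzmann weights, k_B = 1
    Z = w.sum(axis=1)
    E1 = (w * levels).sum(axis=1) / Z
    E2 = (w * levels**2).sum(axis=1) / Z
    return (E2 - E1**2) / T**2

T = np.linspace(0.05, 2.0, 400)
C = schottky_C(T)
i = C.argmax()
print(f"peak: C/R = {C[i]:.2f} at T = {T[i]:.2f} K")   # ~0.90 R near 0.3 K
```

This reproduces the ≈0.9 R maximum near 0.3 K cited from Blöte et al. above.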
IUHET–416 WHITHER HADRON SUPERSYMMETRY? D. B. Lichtenberg Physics Department Indiana University Bloomington, IN 47405, USA (Invited talk to be given at Orbis Scientiae, Fort Lauderdale, December 16–19, 1999. A revised version will be published in the proceedings.) Abstract. A dynamically broken hadron supersymmetry appears to exist as a consequence of QCD. The reasons for the supersymmetry appear most transparently in the framework of the constituent quark model with a diquark approximation to two quarks. Applications of the supersymmetry have led to relations between meson and baryon masses and to predictions that certain kinds of exotic hadrons should not be observed. I summarize the successful applications and discuss possible future directions for this research. INTRODUCTION Physicists have applied the concept of supersymmetry to a number of different areas. To particle physicists, the most familiar supersymmetry is a spontaneously broken supersymmetry between particles and sparticles, for which at present no experimental evidence exists. However, there is experimental evidence for dynamically broken supersymmetries in the areas of atomic physics, nuclear physics, and hadron physics. As far as I know, the oldest of these applications of supersymmetry was to hadron physics, discussed first by Miyazawa in 1966. Almost a decade later, Catto and Gürsey made it plausible that dynamically broken hadron supersymmetry is a consequence of QCD. They also showed that one consequence of the supersymmetry is that Regge trajectories of mesons and baryons have approximately the same slope. The reason for hadron supersymmetry is most transparent in the approximation to QCD known as the constituent quark model. In this model, the reason for hadron supersymmetry can be seen as follows: According to QCD, an antiquark belongs to a $`\overline{\mathrm{𝟑}}`$ multiplet of color SU(3). A two-quark system, which I call a diquark, can be in either a 6 or a $`\overline{\mathrm{𝟑}}`$ multiplet. Any two constituent quarks in a baryon must belong to the $`\overline{\mathrm{𝟑}}`$ so that the baryon can be an overall color singlet. Now a meson contains a constituent quark and a constituent antiquark. If we replace the antiquark (a fermion) by a $`\overline{\mathrm{𝟑}}`$ diquark (a boson), we make a supersymmetric transformation of a meson into a baryon. This transformation does not change the color configuration. Because, in first approximation, the QCD interaction depends only on the color configuration, the force between the quark and diquark in a baryon should be approximately the same as the force between the quark and antiquark in a meson. Hence, we should be able to use supersymmetry to relate the properties of baryons to the properties of mesons. If we replace antiquarks by diquarks in normal hadrons, we can obtain exotic hadrons. For example, if we replace $`Q\overline{Q}`$ (a bar on the symbol for a particle denotes the antiparticle) by $`D\overline{D}`$, where $`Q`$ is a quark and $`D`$ is a diquark, we obtain an exotic meson from a normal one. Making use of supersymmetry, we can relate properties of exotic hadrons to similar properties of normal ones. In exotic hadrons, a diquark can be either in a $`\overline{\mathrm{𝟑}}`$ or a 6 multiplet of color. The interactions of the 6 cannot be related by supersymmetry to the interactions of an antiquark, and so we must neglect the 6.
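The color factors behind this neglect can be made explicit. A minimal sketch using textbook SU(3) Casimirs (standard numbers, not anything derived in this talk): the short-distance quark-quark Coulomb term scales with $`[C_2(\mathrm{pair})-2C_2(\mathrm{𝟑})]/2`$, which is negative (attractive) for the $`\overline{\mathrm{𝟑}}`$ and positive (repulsive) for the 6.

```python
# Quadratic Casimirs C2 for SU(3) color representations (textbook values)
C2 = {"3": 4.0 / 3.0, "3bar": 4.0 / 3.0, "6": 10.0 / 3.0}

def qq_color_factor(pair_rep):
    """Coefficient of the short-distance qq Coulomb term, V ~ factor * alpha_s / r."""
    return 0.5 * (C2[pair_rep] - 2.0 * C2["3"])

for rep in ("3bar", "6"):
    f = qq_color_factor(rep)
    print(f"qq in {rep:4s}: factor = {f:+.3f}  ({'attractive' if f < 0 else 'repulsive'})")
# -> 3bar: -2/3 (attractive),  6: +1/3 (repulsive)
```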
We justify this neglect as follows: When two quarks are close together, QCD says that their Coulomb-like interaction is attractive in a $`\overline{\mathrm{𝟑}}`$ and repulsive in a 6. It is then plausible that the $`\overline{\mathrm{𝟑}}`$ lies lower in energy than the 6. If we confine ourselves to low-mass exotics, we hope that we may safely neglect the contribution of color-6 diquarks. The difficulty with applying supersymmetry to hadrons is that the supersymmetry is badly broken, or the pion and proton would have the same mass. Miyazawa was already aware of this difficulty in 1966. Supersymmetry breaking arises from at least three differences between a diquark and a quark (or antiquark): 1) they have different sizes; 2) they have different masses; and 3) they have different spins. We briefly discuss these differences. 1) Obviously, a diquark is not a point particle, but neither is a constituent quark, as it consists of a pointlike quark surrounded by a cloud of gluons and quark-antiquark pairs. I have not seen any paper discussing how supersymmetry is broken by size differences between quark and diquark, and I have made no progress on this problem myself, so I have to neglect the effects of diquark size. 2) Mass effects can be taken into account in several ways. One particularly simple method is to relate mass differences between mesons to mass differences between baryons in such a way that the effects of the diquark-quark mass difference are most likely to cancel out. Another method is to make use of the fact that the quark-antiquark binding energy in mesons depends smoothly on the constituent quark masses. In this method, the binding energy of a quark with a diquark can be estimated by treating the diquark as a fictitious antiquark with the diquark mass. 3) There are spin-dependent forces in QCD. One way to minimize their effect is to take appropriate averages over spin. Another way is to assume that the spin-dependent interaction energy between two quarks in a diquark is independent of the hadrons in which the diquark is embedded. This assumption is not strictly correct, but it is a good approximation. Then the spin-dependent contribution to the interaction energy can be approximately extracted from the experimentally known masses of baryons. In both spin averaging and extracting spin-dependent forces from baryons, it is assumed that the spin-dependent force in ground-state hadrons is the usual chromomagnetic force arising from one-gluon exchange. This assumption has been challenged by Glozman and Riska, and I have discussed the arguments in favor of one-gluon exchange in a talk at the last Orbis meeting. I have been working on the consequences of broken hadron supersymmetry for several years and have spoken about it at two previous Orbis meetings. In the present talk I shall update the conclusions of my two earlier talks and discuss possible directions for future work in hadron supersymmetry. RELATIONS BETWEEN MESON AND BARYON MASSES From here on, I will sometimes call an antiquark a quark and an antidiquark a diquark. In this language, for example, a meson is a two-quark state and an exotic meson is a four-quark or two-diquark state. We use the notation that $`Q`$ denotes any quark, $`q`$ denotes a light $`u`$ or $`d`$ quark, and $`D`$ ($`QQ`$) denotes a color $`\overline{\mathrm{𝟑}}`$ diquark.
Also $`M`$ ($`Q\overline{Q}`$) is a normal meson, $`M_E`$ ($`QQ\overline{Q}\overline{Q}`$ or $`D\overline{D}`$) is an exotic meson, $`B`$ ($`QQQ`$ or $`QD`$) is a normal baryon, $`B_E`$ ($`QQQQ\overline{Q}`$ or $`DD\overline{Q}`$) is an exotic baryon, and $`B_2`$ ($`QQQQQQ`$ or $`DDD`$) is a dibaryon. As a consequence of hadron supersymmetry, we can make the transformations $$\overline{Q}\to D,\qquad Q\to \overline{D}.$$ $`(1)`$ Applying either the first or second of eqs. (1) one or more times, we obtain $$M=Q\overline{Q}\to B=QD,$$ $`(2)`$ $$B=QD\to M_E=\overline{D}D,$$ $`(3)`$ $$\overline{B}=\overline{Q}\overline{Q}\overline{Q}\to B_E=DD\overline{Q},$$ $`(4)`$ $$\overline{B}=\overline{Q}\overline{Q}\overline{Q}\to B_2=DDD.$$ $`(5)`$ We next consider how to take into account supersymmetry breaking. One way to minimize the effects of spin-dependent forces is to average over spins in such a way that perturbatively the spin-dependent forces cancel out. In order to do this, we must make an assumption about the nature of these spin-dependent forces. Following De Rújula et al. , we assume that the spin-dependent forces arise from one-gluon exchange. Then the spin averaging of ground-state hadrons is given by the prescription of Anselmino et al. . One way to minimize the effects of mass differences between quarks and diquarks is to let one quark in the diquark be a light quark $`q`$. We do this by confining ourselves (in this section) to the transformations $$\overline{Q}\to D_q=Qq,\qquad Q\to \overline{D}_q=\overline{Q}\overline{q}.$$ $`(6)`$ We also take differences in masses such that the effect of the extra light quark in the diquark will tend to cancel out. In the following, we let the symbol for a hadron denote its mass, and we write the constituent quarks of a hadron in parentheses following the hadron symbol. We are led by the considerations of the previous paragraph to consider the difference of two meson masses: $`M(\overline{Q}_2q)-M(\overline{Q}_1q)`$. Applying the transformation of eq. (6), we get $$M(\overline{Q}_2q)-M(\overline{Q}_1q)=B(Q_2qq)-B(Q_1qq).$$ $`(7)`$ The masses in eq. (7) are to be thought of as spin averages, i.e. $$M(\overline{q}q)=(3\rho +\pi )/4,\qquad M(\overline{s}q)=(3K^{*}+K)/4,$$ $$M(c\overline{q})=(3D^{*}+D)/4,\qquad M(\overline{b}q)=(3B^{*}+B)/4,$$ $`(8)`$ $$B(qqq)=(\mathrm{\Delta }+N)/2,\qquad B(sqq)=(2\mathrm{\Sigma }^{*}+\mathrm{\Sigma }+\mathrm{\Lambda })/4,$$ $$B(cqq)=(2\mathrm{\Sigma }_c^{*}+\mathrm{\Sigma }_c+\mathrm{\Lambda }_c)/4,\qquad B(bqq)=(2\mathrm{\Sigma }_b^{*}+\mathrm{\Sigma }_b+\mathrm{\Lambda }_b)/4,$$ $`(9)`$ where the symbols for the mesons and baryons are those of the Particle Data Group . Using eqs. (8) and (9) in (7), we obtain the sum rules $$(3K^{*}+K)/4-(3\rho +\pi )/4=(2\mathrm{\Sigma }^{*}+\mathrm{\Sigma }+\mathrm{\Lambda })/4-(N+\mathrm{\Delta })/2,$$ $`(10)`$ $$(3D^{*}+D)/4-(3K^{*}+K)/4=(2\mathrm{\Sigma }_c^{*}+\mathrm{\Sigma }_c+\mathrm{\Lambda }_c)/4-(2\mathrm{\Sigma }^{*}+\mathrm{\Sigma }+\mathrm{\Lambda })/4,$$ $`(11)`$ $$(3B^{*}+B)/4-(3D^{*}+D)/4=(2\mathrm{\Sigma }_b^{*}+\mathrm{\Sigma }_b+\mathrm{\Lambda }_b)/4-(2\mathrm{\Sigma }_c^{*}+\mathrm{\Sigma }_c+\mathrm{\Lambda }_c)/4.$$ $`(12)`$ These same sum rules were obtained earlier by a method not using hadron supersymmetry. However, the assumption of one-gluon exchange was needed for averaging over spin states. We can test the sum rules with the experimental values of the known hadron masses. The left-hand side of eq. (10) is $`182\pm 1`$ MeV, while the right-hand side is $`184\pm 1`$ MeV, in good agreement with experiment. Similarly, the left-hand side of eq.
(11) is $`1179\pm 1`$ MeV, while the right-hand side is $`1174\pm 1`$ MeV, also in satisfactory agreement with the data. In a 1996 talk, I noted that eq. (12) was consistent with preliminary data on baryons containing $`b`$ quarks, but the 1998 tables of the Particle Data Group do not confirm those data. Therefore, the sum rule of eq. (12) remains to be tested by experiment. The fact that the sum rules of eqs. (10) and (11) agree with the data constitutes evidence in support of spin-dependent forces arising from one-gluon exchange. These sum rules do not follow from the spin-dependent forces postulated by Glozman and Riska. In their work, the spin-dependent forces in baryons containing only light quarks arise from pseudoscalar meson exchanges. However, I don't see how the same mechanism can apply to mesons or to baryons containing heavy quarks. If I needed two or three different mechanisms to account for the spin-dependent forces in hadrons (or a linear combination of them), then I would not know how to obtain sum rules. EXOTIC HADRONS We do not need to restrict ourselves to spin-averaged hadron masses or to diquarks containing at least one light quark, as we can explicitly take into account mass and spin effects. I discussed this problem at a previous Orbis, and so will only briefly review the method. We start with the spin-averaged hadron masses, but include spin effects explicitly at a later stage. We assign constituent masses to the quarks such that the binding energy of a quark and antiquark in a meson is a smooth function of the reduced mass of the two constituents. We can use this “meson curve” to read off the binding energy of a fictitious hadron made of a fictitious quark and antiquark of any given masses. We consider a spin-averaged baryon made of a quark and a diquark, treating the diquark as a fictitious antiquark. Our first guess for the diquark mass is that it equals the sum of its two constituent quark masses. We obtain the reduced mass of the quark and diquark and read off the binding energy from the meson curve. We add this binding energy to the masses of the quark and diquark to obtain a calculated spin-averaged baryon mass. In general, this mass does not equal the experimental mass of a baryon, averaged over spin. However, by repeatedly adjusting the mass of the diquark, we can obtain the correct spin-averaged baryon mass. We are thus able to obtain the spin-averaged diquark masses for constituent quarks of any flavors. Next we obtain diquark properties from observed baryon masses rather than from spin-averaged masses. We extract the spin-dependent interaction energies of two quarks in a diquark from the observed baryon masses. Adding these terms to the spin-averaged diquark mass, we obtain the masses of spin-one and spin-zero diquarks. We are now ready to calculate the masses of ground-state exotic hadrons. We first consider exotic mesons containing at least one diquark of spin zero. In such mesons, there are no spin-dependent forces between the diquarks. Therefore, we only have to calculate the reduced mass of the constituents and add the binding energy from the meson curve to the diquark masses in order to obtain the exotic meson mass. (If both diquarks have spin one, there are additional spin-dependent forces, but their effects can be calculated.) The result of these calculations is that diquark-antidiquark exotic mesons have sufficiently large masses to decay rapidly into two normal mesons.
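As an arithmetic cross-check of the numbers quoted for eqs. (10) and (11) above, here is a short sketch; the input masses are approximate 1998-era PDG values supplied by me, not numbers from the text.

```python
# Isospin-averaged masses in MeV (approximate 1998 PDG values, my inputs)
rho, pi = 770.0, 138.0
Kst, K  = 893.6, 495.7
Dst, D  = 2008.4, 1867.0
Sigst, Sig, Lam    = 1384.6, 1193.2, 1115.7
N, Delta           = 938.9, 1232.0
Sigcst, Sigc, Lamc = 2517.5, 2453.6, 2284.9

def meson_avg(v, p):             # (3*vector + pseudoscalar)/4
    return (3.0 * v + p) / 4.0

def baryon_avg(s32, s12, lam):   # (2*Sigma* + Sigma + Lambda)/4
    return (2.0 * s32 + s12 + lam) / 4.0

lhs10 = meson_avg(Kst, K) - meson_avg(rho, pi)
rhs10 = baryon_avg(Sigst, Sig, Lam) - (N + Delta) / 2.0
lhs11 = meson_avg(Dst, D) - meson_avg(Kst, K)
rhs11 = baryon_avg(Sigcst, Sigc, Lamc) - baryon_avg(Sigst, Sig, Lam)

print(f"eq. (10): LHS = {lhs10:.0f} MeV, RHS = {rhs10:.0f} MeV")   # ~182 vs ~184
print(f"eq. (11): LHS = {lhs11:.0f} MeV, RHS = {rhs11:.0f} MeV")   # ~1179 vs ~1174
```

This reproduces the 182 vs. 184 MeV and 1179 vs. 1174 MeV comparisons quoted above.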
Because we expect production cross sections to be small and decay widths large, it is unlikely that such exotic mesons will be observed. A possible exception is that an exotic meson containing a $`bb`$ diquark might be stable against strong decay, but its production cross section will be extremely small. Our conclusion is in agreement with the fact that no exotic mesons composed of a diquark and antidiquark have yet been seen. The same method can be applied to exotic baryons and to dibaryons. However, there is the complication that, except in the limit of point-like diquarks, the Pauli principle is not strictly satisfied for quarks in different diquarks. The results are similar to the results for mesons: exotic baryons and dibaryons (other than the deuteron) are not likely to be observed. Again, this conclusion is in agreement with observations to date. THE FUTURE The predictions of the previous sections follow from broken hadron supersymmetry plus spin-dependent forces arising from one-gluon exchange. It is gratifying that we have not obtained any predictions in serious disagreement with experiments done so far, but it is disappointing that our model says that diquark exotics will probably not be observed. Although enough has been established so far to give me confidence that hadron supersymmetry is a useful concept, open questions remain to be answered. Among them are: (1) A diquark may be almost as large as the hadron that contains it. How do we correct for the non-negligible size of a diquark? (2) Is there any way to take into account the contribution from color-sextet diquarks to exotic hadrons? (3) If the spin-dependent forces in some hadrons are not given by one-gluon exchange but rather by the mechanism of Glozman and Riska, how do the results change? Are the changes large enough to destroy the good agreement with experiment? (4) How can we take the Pauli principle into account in exotic baryons and dibaryons? (5) Exotic hadrons containing diquarks can mix with other hadrons having the same quantum numbers. For example, quantum numbers permitting, a diquark-antidiquark meson can mix with normal mesons, hybrids, and glueballs. Can we take this mixing into account? (6) Are there any other useful predictions to be obtained from broken hadron supersymmetry? In conclusion, if physicists can successfully tackle the preceding open questions, hadron supersymmetry will rest on a much sounder foundation than it does now. However, if answers are not forthcoming, it may be time for physicists to store in their minds that broken hadron supersymmetry exists and go on to other topics. ACKNOWLEDGMENTS Some of this work was done with Enrico Predazzi and Renato Roncaglia. I should like to thank Florea Stancu for helpful discussions. REFERENCES
1. H. Miyazawa, Baryon number changing currents, Prog. Theor. Phys. 36:1266 (1966).
2. S. Catto and F. Gürsey, Algebraic treatment of effective supersymmetry, Nuovo Cimento 86:201 (1985).
3. R. Roncaglia, A.R. Dzierba, D.B. Lichtenberg, and E. Predazzi, Predicting the masses of heavy hadrons without an explicit Hamiltonian, Phys. Rev. D 51:1248 (1995).
4. M. Anselmino, D.B. Lichtenberg, and E. Predazzi, Quark color-hyperfine interactions in baryons, Z. Phys. C 48:605 (1990).
5. A. De Rújula, H. Georgi, and S. L. Glashow, Hadron masses in a gauge theory, Phys. Rev. D 12:147 (1975).
6. L.Ya. Glozman and D.O. Riska, The spectrum of the nucleons and the strange hyperons and chiral dynamics, Phys. Rep. 268:263 (1996).
7. D. B. Lichtenberg, Spin-dependent forces between quarks in hadrons, Orbis Scientiae, Fort Lauderdale (Dec. 18–21, 1998), B.N. Kursunoglu et al., eds., Plenum Press, New York (1999).
8. D. B. Lichtenberg, Hadron Supersymmetry and Relations Between Meson and Baryon Masses, Orbis Scientiae, Miami Beach (Jan. 25–28, 1996), in Neutrino Mass, Dark Matter, Gravitational Waves, Monopole Condensation and Light Cone Quantization, B. Kursunoglu, S.L. Mintz, and A. Perlmutter, eds., Plenum Press, New York (1996), pp. 319–322.
9. D.B. Lichtenberg, Exotic Hadrons, Orbis Scientiae, Miami Beach (Jan. 23–26, 1997), in High Energy Physics and Cosmology: Celebrating the Impact of 25 Gables Conferences, B.N. Kursunoglu, S.L. Mintz, and A. Perlmutter, eds., Plenum Press, New York (1997), pp. 59–65.
10. Particle Data Group: C. Caso et al., Review of Particle Physics, Eur. Phys. J. C 3:1 (1998).
11. D.B. Lichtenberg and R. Roncaglia, New formulas relating the masses of some baryons and mesons, Phys. Lett. B 358:106 (1995).
# Two-Dimensional Wigner Crystal in Anisotropic Semiconductor ## Abstract We investigate the effect of mass anisotropy on the Wigner crystallization transition in a two-dimensional (2D) electron gas. The static and dynamical properties of a 2D Wigner crystal have been calculated for arbitrary 2D Bravais lattices in the presence of anisotropic mass, as may be obtainable in Si MOSFETs with a (110) surface. By studying the stability of all possible lattices, we find significant changes in the crystal structure and melting density of the electron lattice with the lowest ground state energy. One of the most unexpected discoveries in 2D electron systems in semiconductor structures is the possibility of a metal-insulator transition (MIT), first suggested by the experiments of Kravchenko et al. in high mobility Si metal-oxide-semiconductor field-effect transistors (MOSFETs) in zero magnetic field. Later, a number of groups have reported possible MITs in 2D electron or hole systems in several different semiconductor heterostructures, as well as in MOSFETs. However, a genuine MIT in two dimensions is in contradiction with the scaling theory of localization, which predicts that in the absence of electron-electron (or hole-hole) interactions no true metallic behavior is possible in two dimensions with pure potential scattering. In fact, no fundamental principle requires that such a scaling argument holds in the presence of strong interactions, which can be measured by the ratio of the Coulomb energy to the Fermi energy, expressed by the dimensionless parameter $`r_s=m^{*}e^2/\hbar ^2ϵ\sqrt{\pi n}`$, and there have been a number of suggestions that this is the case. Theoretical models not involving an MIT have also been proposed. Among the experimental systems, the critical $`r_s`$ varies from 5 in Si<sub>0.88</sub>Ge<sub>0.12</sub> up to 35 in GaAs/AlGaAs, which clearly suggests that the Coulomb interactions are certainly not negligible, but in fact may be the dominating scale. It is well known that 2D electrons crystallize into a triangular lattice (Wigner crystallization) in the low density limit where electron-electron interactions dominate. In an ideally clean 2D system, the critical $`r_s`$ is predicted by Tanatar and Ceperley to be $`37\pm 5`$ from quantum Monte Carlo simulations. Chui and Tanatar further found that the effect of impurities can lower the critical $`r_s`$ at which solidification occurs (at least on short length scales) to as low as $`7.5`$. This is in good agreement with the range of the critical $`r_s`$ observed in the recent experiments. In this paper, we have carried out a further test of the relevance of the Wigner crystallization phenomenon to the putative 2D MITs by exploring the effect of mass anisotropy on Wigner crystallization, since mass anisotropy is present in Si- or Ge-based semiconductor devices. In particular, we have studied the ground state energy of arbitrary 2D Bravais lattices with anisotropic mass. To the best of our knowledge, no result on such properties of a 2D Wigner crystal with anisotropic mass has been published so far. The ground state energy of a lattice in the low density limit, where exchange processes are negligible, can be written as the sum of the static Coulomb energy and the vibrational zero-point energy.
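Before turning to the lattice sums, it may help to attach numbers to the $`r_s`$ defined above. A minimal sketch in Gaussian units; the effective mass and dielectric constant below are illustrative Si MOSFET values assumed by me, not parameters from this paper.

```python
import numpy as np

hbar = 1.0546e-27   # erg s
e    = 4.803e-10    # esu
me   = 9.109e-28    # g

def r_s(n_cm2, m_star_over_me, eps):
    """r_s = m* e^2 / (hbar^2 eps sqrt(pi n)), with n the areal density [cm^-2]."""
    m = m_star_over_me * me
    return m * e**2 / (hbar**2 * eps * np.sqrt(np.pi * n_cm2))

# assumed illustrative values: m* ~ 0.2 m_e, eps ~ 7.7 (Si/SiO2 average)
for n in (1e10, 1e11, 1e12):
    print(f"n = {n:.0e} cm^-2  ->  r_s ~ {r_s(n, 0.2, 7.7):.1f}")
```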
With the aid of Ewald's transformation, we can write the static ground state energy per electron of a Bravais lattice as a sum over its lattice sites $`𝐫^0`$ and reciprocal lattice vectors $`𝐆`$: $$E_s=\frac{1}{N}\sum _{i<j}\frac{e^2}{|𝐫_i^0-𝐫_j^0|}=-\frac{2e^2}{\sqrt{v}}\left[1-\frac{1}{4}\sum _{𝐫^0\ne 0}\varphi _{-1/2}\left(\frac{\pi }{v}|𝐫^0|^2\right)-\frac{1}{4}\sum _{𝐆\ne 0}\varphi _{-1/2}\left(\frac{v}{4\pi }|𝐆|^2\right)\right],$$ $`(2)`$ where $`\varphi _n(x)`$ is the Misra function $$\varphi _n(x)=\int _1^{\mathrm{\infty }}dt\,t^ne^{-xt},$$ $`(3)`$ and $`v`$ is the area of the primitive unit cell of the direct lattice. We assume the existence of a neutralizing background of positive charges, which regularizes the divergence in the evaluation of the static energy; the static energy is independent of the mass anisotropy. In a harmonic approximation, the spectrum of lattice vibrations is determined by $$H=\sum _i\sum _\alpha \frac{p_\alpha (𝐫_i^0)^2}{2m_\alpha }+\frac{1}{2}\sum _{i,j}\sum _{\alpha ,\beta }u_\alpha (𝐫_i^0)\mathrm{\Phi }_{\alpha \beta }(𝐫_i^0,𝐫_j^0)u_\beta (𝐫_j^0),$$ $`(5)`$ where $`𝐮(𝐫^0)=𝐫-𝐫^0`$ is the deviation from equilibrium of the electron whose equilibrium site is $`𝐫^0`$. The atomic force constants $`\mathrm{\Phi }_{\alpha \beta }(𝐫_i^0,𝐫_j^0)`$ are defined by $$\mathrm{\Phi }_{\alpha \beta }(𝐫_i^0,𝐫_j^0)=\{\begin{array}{cc}-\frac{\partial ^2v(r)}{\partial r_\alpha \partial r_\beta }|_{𝐫=𝐫_i^0-𝐫_j^0},\hfill & i\ne j\hfill \\ \sum _{k\ne i}\frac{\partial ^2v(r)}{\partial r_\alpha \partial r_\beta }|_{𝐫=𝐫_i^0-𝐫_k^0},\hfill & i=j\hfill \end{array}$$ $`(6)`$ with $`v(r)=e^2/r`$. The normal mode frequencies $`\omega _\lambda (𝐪)`$ are solutions of the 2D eigenvalue problem: $$𝐌\omega _\lambda ^2(𝐪)ϵ_\lambda =𝐃(𝐪)ϵ_\lambda ,$$ $`(7)`$ where the dynamical matrix $`𝐃(𝐪)`$ is given by $$𝐃(𝐪)=\sum _{𝐑^0}𝚽(𝐑^0)e^{i𝐪\cdot 𝐑^0}.$$ $`(8)`$ In anisotropic systems, $`𝐌`$ is a $`2\times 2`$ diagonal mass matrix. The dynamical ground state energy can be obtained by integrating the zero-point energies of all modes within the Brillouin zone. Bonsall and Maradudin did a comprehensive study of the ground state energies of five different crystal structures, including the triangular lattice and the square lattice, for isotropic 2D electrons. They found that the triangular lattice has the lowest energy in the low density limit, $$E_0=-\frac{2.21}{r_s}+\frac{1.63}{r_s^{3/2}},$$ $`(9)`$ in units of $`\mathrm{Ryd}=m^{*}e^4/\hbar ^2`$. In Eq. (9), the first term comes from the Coulomb interactions between electrons sitting on the lattice sites, while the second term is from the harmonic oscillations of electrons around their equilibrium positions. They also pointed out that the transverse branch of the dispersion relation is purely imaginary for certain directions in the square lattice, implying a dynamical instability of this lattice. In the presence of mass anisotropy, the ground state may not necessarily be one of the most symmetric lattices. Therefore, we have developed a systematic method to explore the ground state energies of 2D Bravais lattices with different electron concentrations and mass anisotropy. First, we map an arbitrary Bravais lattice to a point on a 2D plane by the following scheme. From an arbitrary lattice site, which is chosen to be the origin, one can choose an arbitrary pair of non-collinear lattice vectors that span the Bravais lattice.
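As a numerical aside, the Ewald form above can be checked against the triangular lattice; the short sketch below (the use of scipy and the truncation radius are my own choices) reproduces the $`-2.21/r_s`$ Madelung term of Eq. (9).

```python
import numpy as np
from scipy.special import erfc

def phi(x):
    """Misra function phi_{-1/2}(x) = int_1^inf dt t^(-1/2) e^(-x t) = sqrt(pi/x) erfc(sqrt(x))."""
    return np.sqrt(np.pi / x) * erfc(np.sqrt(x))

# triangular lattice with lattice constant a = 1
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3.0) / 2.0])
v = abs(a1[0] * a2[1] - a1[1] * a2[0])                  # unit-cell area
b1 = 2.0 * np.pi * np.array([a2[1], -a2[0]]) / v        # reciprocal basis vectors
b2 = 2.0 * np.pi * np.array([-a1[1], a1[0]]) / v

def lattice_sum(e1, e2, scale, nmax=6):
    s = 0.0
    for i in range(-nmax, nmax + 1):
        for j in range(-nmax, nmax + 1):
            if i == 0 and j == 0:
                continue
            s += phi(scale * np.dot(i * e1 + j * e2, i * e1 + j * e2))
    return s

S_dir = lattice_sum(a1, a2, np.pi / v)           # direct-lattice sum
S_rec = lattice_sum(b1, b2, v / (4.0 * np.pi))   # reciprocal-lattice sum
c_M = -2.0 * (1.0 - 0.25 * (S_dir + S_rec))      # E_s = c_M e^2 sqrt(n), n = 1/v
print(f"E_s = {c_M:.4f} e^2 sqrt(n)  =  {2.0 * c_M / np.sqrt(np.pi):.4f} / r_s  Ry")
```

The printed coefficient, ≈ −2.2122, matches the static term of Eq. (9).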
If one rotates and scales the lattice so that one of the two primitive vectors lies along the x-axis, normalized to unit length, the end point of the other lattice vector represents the 2D lattice structure. However, a lattice can be mapped to an infinite number of points due to the arbitrary choice of primitive vectors. One can reduce the area of points by choosing the two shortest vectors from the origin, with the second shortest one placed along the x-axis. Thus, different lattices are represented by points in the upper positive quadrant, as shown in Fig. 1. Using reflection symmetry, however, all the points can be confined by the y-axis and the two circles shown (the shaded area) in Fig. 1. The three corners $`(0,1)`$, $`(\frac{1}{2},\frac{\sqrt{3}}{2})`$ and $`(0,0)`$ represent the square lattice, the triangular lattice, and the quasi-one-dimensional lattice, respectively. We can then sample 2D lattices by applying a rectangular mesh on the area and calculating the ground state energy on each grid point, which can be uniquely mapped back to a 2D Bravais lattice. Applying Bonsall and Maradudin's calculations to all sampled lattices with isotropic mass, we found that most lattices have imaginary vibrational modes and are therefore unstable. Under our mapping, only lattices around the triangular lattice are stable, occupying roughly 10% of the reduced zone, as shown in Fig. 1. Using symmetry, one can actually show that the triangular lattice is the center of all stable lattices. Within the stable area the triangular lattice has the lowest energy for $`r_s>25`$, which is slightly below the Wigner crystallization density $`r_s=37\pm 5`$ predicted in quantum Monte Carlo calculations. When $`r_s=25`$, the lattice represented by point A (0.27, 0.95) starts to have lower energy than the triangular lattice, whose ground state energy is found to be $$E_{0A}=-\frac{2.206}{r_s}+\frac{1.594}{r_s^{3/2}}.$$ $`(10)`$ Since point A is adjacent to the unstable area, this implies that the Wigner lattice is no longer the ground state of the 2D electron system. We have studied different ratios between the longitudinal mass $`m_l`$ and the transverse mass $`m_t`$. In this paper, we present results for $`m_l/m_t=3`$, which is approximately the mass ratio in silicon (110) surface structures. We use the geometric mean of the two mass components as the effective mass in calculating $`r_s`$. The relative orientation of the lattice vectors with respect to the principal axes of the anisotropic mass has also been taken into consideration. Figure 2 compares the dispersion relation of the anisotropic triangular lattice with that of the isotropic lattice. Mass anisotropy lifts the twofold degeneracy at the J point, and shifts the longitudinal branch as well. The ground state energy for the triangular lattice for $`m_l/m_t=3`$ therefore becomes $$E_{0T}=-\frac{2.212}{r_s}+\frac{1.694}{r_s^{3/2}}.$$ $`(11)`$ Compared with Eq. (9), one concludes that the mass anisotropy leads to an increase in the total lattice vibrational energy. Since the electrostatic ground state energy is independent of mass anisotropy, one expects that at low enough density the triangular lattice remains the lowest energy configuration. However, already for $`m_l/m_t=3`$, even at very low density (specifically $`r_s<1000`$), we find that the lattice with the lowest ground state energy starts to shift along one of the circular curves that confine the reduced lattice area, towards the quasi-one-dimensional lattice.
It is understandable that a lattice can stretch along the axis of the smaller mass to reduce the longitudinal energy of lattice vibration. This trend is shown in the enlarged stable area in Fig. 3. Fig. 4 shows plots of the ground state energy for different lattices, represented by the two circular curves (A-T-B), at different $`r_s`$ for $`m_l/m_t=3.0`$. When $`r_s=100`$, the ground state energy is minimized at a point along curve TB. We expect 2D electrons to crystallize in the lattice structure mapped to this point at such a density. When $`r_s=80`$ the lattice with the lowest energy moves to the corner of the stable area, B, which connects with the area of unstable lattices. The comparison of the phonon dispersion curves of the triangular lattice and the lattice mapped to point B suggests that, by crystallizing in a less symmetric lattice, the electrons gain more energy in the longitudinal modes than they lose in the transverse modes. When $`r_s=60`$, the ground state energy decreases linearly when approaching point B from the stable area along the circular curve TB, which implies that no stable lattice can be formed at this density. The comparison of the ground state energies suggests that the Wigner crystallization density for ideally clean 2D anisotropic electrons, such as silicon (110) surface electrons, is much lower than that predicted for an isotropic system. In this calculation we have neglected the quantum statistics of the electrons (i.e., the electronic exchange energy). While this may have a quantitative effect, it is unlikely to completely alter our basic conclusion that the $`r_s`$ of the Wigner crystallization transition increases with mass anisotropy. This is especially true since the $`r_s`$ of interest ($`\sim 80`$) is very large, where exchange energies are smallest. To summarize, we have found that the ground state of the 2D electron system has a strong dependence on the mass anisotropy. In a pure system, anisotropic electrons require a larger spacing to form a Wigner lattice, so as to stabilize the long-wavelength transverse modes. It would be interesting to see if this dramatic reduction in the Wigner crystal density can be observed in extremely clean Si (110) MOSFETs, for which our calculations estimate that the critical density, in terms of $`r_s`$, can be as large as 80. This research was supported by NSF DMR-9809483.
# Direct Detection of WIMP Dark Matter Review Talk given at the Sixth International Workshop on Topics in Astroparticle and Underground Physics, TAUP 99 (Paris, College de France, September 6-10, 1999), to be published in Nucl. Phys. B (Proc. Suppl.). ## 1. Introduction There is substantial evidence, reviewed at length in this Workshop, that most of the matter of the universe is dark, and a compelling motivation to believe that it consists mainly of non-baryonic objects. From the cosmological point of view, two big categories of non-baryonic dark matter have been proposed: cold (WIMPs, axions) and hot (light neutrinos) dark matter, according to whether they were slow or fast moving at the time of galaxy formation. Without entering into considerations about how much of each component is needed to better fit the observations, nor about how large the baryonic component of the galactic halo could be, we assume that there is enough room for WIMPs in the halo to try to detect them, either directly or through their by-products. Discovering this form of dark matter is one of the big challenges in Cosmology, Astrophysics and Particle Physics. The indirect detection of WIMPs proceeds currently through two main experimental lines: either by looking in space for positrons, antiprotons, or other antinuclei produced by WIMP annihilation in the halo, or by searching in large underground detectors or underwater neutrino telescopes for upward-going muons produced by the energetic neutrinos emerging as final products of WIMP annihilation in celestial bodies (Sun, Earth…). The direct detection of WIMPs relies on the measurement of the WIMP elastic scattering off the target nuclei of a suitable detector. Pervading the galactic halos, slow-moving ($`\sim 300`$ km/s) and heavy ($`10`$–$`10^3`$ GeV) WIMPs could make a nucleus recoil with an energy of a few keV ($`\mathrm{T}\sim 1`$–$`100`$ keV), at a rate which depends on the type of WIMP and interaction. Only a fraction $`\mathrm{QT}`$ of the recoil energy $`\mathrm{T}`$ is visible in the detector, depending on the type of detector and target and on the mechanism of energy deposition. The so-called Quenching Factor Q is essentially unity in thermal detectors, whereas for the nuclei in conventional detectors it ranges from about 0.1 to 0.6. Because of the low interaction rate [typically $`<0.1`$ (0.001) c/kg day for spin-independent (spin-dependent) couplings] and the small energy deposition, the direct search for particle dark matter through its scattering off nuclear targets requires ultralow background detectors of a very low energy threshold. Moreover, the (almost) exponentially decreasing shape of the predicted nuclear recoil spectrum mimics that of the low energy background registered by the detector. All these features together make WIMP detection a formidable experimental challenge. Customarily, one compares the predicted event rate with the observed spectrum. If the former turns out to be larger than the measured one, the particle under consideration can be ruled out as a dark matter component. That is expressed as a contour line $`\sigma `$(m) in the plane of the WIMP-nucleus elastic scattering cross section versus the WIMP mass, which excludes, for each mass m, those particles with a cross-section above the contour line $`\sigma `$(m). The level of background limits, then, the sensitivity to exclusion. This mere comparison of the expected signal with the observed background spectrum is not supposed to detect the tiny imprint left by the dark matter particle, but only to exclude or constrain it.
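To illustrate the near-exponential recoil spectrum mentioned above, here is a minimal sketch of the standard zeroth-order rate shape (the Lewin-Smith form); the WIMP mass, target, and halo velocity below are assumed illustrative values, not numbers from this review.

```python
import numpy as np

amu = 0.9315      # GeV per atomic mass unit
v0  = 220.0e3     # halo velocity parameter [m/s] (assumed)
c   = 2.998e8     # [m/s]

def dRdE(E_keV, m_chi=60.0, A=73):
    """Normalized recoil spectrum (1/(E0 r)) exp(-E/(E0 r)); m_chi in GeV, A = 73 for Ge."""
    m_N = A * amu
    E0 = 0.5 * m_chi * (v0 / c)**2 * 1.0e6        # keV
    r = 4.0 * m_chi * m_N / (m_chi + m_N)**2      # kinematic factor
    return np.exp(-E_keV / (E0 * r)) / (E0 * r)

for E in (2.0, 5.0, 10.0, 20.0, 40.0):
    print(f"E_R = {E:5.1f} keV  ->  dR/dE (relative) = {dRdE(E):.4f}")
```

The steep, featureless fall-off is what makes the signal so easy to confuse with low-energy background.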
A convincing proof of the detection of WIMPs would need to find unique signatures in the data characteristic of them, like temporal (or other) asymmetries, which cannot be faked by the background or by instrumental artifacts. The only distinctive signature investigated up to now is the predicted annual modulation of the WIMP signal rate. The detectors used so far (most of whose results have been presented to this Workshop) are Ge (IGEX, COSME, H/M) and Si (UCSB) diodes, NaI (ZARAGOZA, DAMA, UKDMC, SACLAY, ELEGANTS), Xe (DAMA, UCLA, UKDMC) and CaF<sub>2</sub> (MILAN, OSAKA, ROMA) scintillators, Al<sub>2</sub>O<sub>3</sub> (CRESST, ROSEBUD) and TeO<sub>2</sub> (MIBETA, CUORICINO) bolometers, and Si (CDMS) and Ge (CDMS, EDELWEISS) thermal hybrid detectors, which also measure the ionization. But new detectors and techniques are entering the stage. Examples of such new devices, presented at TAUP, are: a liquid-gas Xenon chamber (UCLA); a gas chamber sensitive to the direction of the nuclear recoil (DRIFT); a device which uses superheated droplets (SIMPLE); and a colloid of superconducting superheated grains (ORPHEUS). There are also some new projects featuring a large amount of target nuclei, both with ionization Ge detectors (GENIUS, GEDEON) and with cryogenic thermal devices (CUORE).

## 2 Strategies for WIMP detection

The rarity and smallness of the WIMP signal dictate the experimental strategies for its detection. The first strategy is to reduce the background, controlling the radiopurity of the detector, components, shielding and environment. The best radiopurity has been obtained in the Ge experiments (IGEX, H/M, COSME). In the case of the NaI scintillators, the backgrounds are still one or two orders of magnitude worse than in Ge (ELEGANTS, UKDMC, DAMA, SACLAY). The next step is to use discrimination mechanisms to distinguish electron recoils (tracers of the background) from nuclear recoils (originated by WIMPs or neutrons). Various techniques have been applied for this purpose: a statistical pulse shape discrimination (PSD) based on the different timing behaviour of the two types of pulses (DAMA, UKDMC, SACLAY); or an identification of the nuclear recoils on an event-by-event basis by measuring at the same time two different mechanisms of energy deposition, like ionization and heat, capitalizing on the fact that for a given deposited energy (measured as phonons) a recoiling nucleus ionizes less than electrons do (CDMS, EDELWEISS). A promising discriminating technique is that used in the liquid-gas Xenon detector with ionization plus scintillation presented to this Workshop (see D. Cline’s contribution to these Proceedings). An electric field prevents recombination, the charge being drifted to create a second pulse in addition to the primary pulse. The amplitudes of the two pulses are different for nuclear recoils and gammas, allowing their discrimination. Another technique is to discriminate the gamma background from neutrons (and so from WIMPs) using threshold detectors—like neutron dosimeters—which are blind to most of the low Linear Energy Transfer (LET) radiation (e, $`\mu `$, $`\gamma `$). A new type of detector (SIMPLE), which uses superheated droplets that vaporize into bubbles upon the energy deposition of a WIMP (or other high-LET particle), has been presented to this Workshop (see J. Collar’s contribution to these Proceedings). The other obvious strategy is to make detectors of very low energy threshold and high efficiency, to see most of the signal spectrum, not just its tail.
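The event-by-event heat/ionization discrimination mentioned above (the CDMS/EDELWEISS approach) can be summarized in a few lines. The following is a minimal sketch only: the yield values and the cut position are illustrative assumptions, not the calibration of any actual experiment.

```python
# Minimal sketch of event-by-event heat/ionization discrimination.
# The yields and the cut below are illustrative assumptions, not the
# calibration of any actual experiment.
def classify(e_phonon_kev, e_ionization_kev, cut=0.5):
    """Classify an event from its ionization yield Y = E_ion / E_phonon.
    Electron recoils give Y ~ 1 (by construction of the calibration),
    while nuclear recoils ionize less (Y ~ 0.2-0.4 in Ge), so a simple
    cut on Y separates the two populations."""
    y = e_ionization_kev / e_phonon_kev
    return "nuclear recoil" if y < cut else "electron recoil"

print(classify(20.0, 19.0))   # gamma-like event  -> electron recoil
print(classify(20.0, 6.0))    # WIMP/neutron-like -> nuclear recoil
```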
That is the case of the bolometer experiments (MIBETA, CRESST, ROSEBUD, CUORICINO, CDMS, EDELWEISS). Finally, one should search for distinctive signatures of the WIMP, to prove that one is indeed seeing a WIMP. Suggested identifying labels are: an annual modulation of the signal rate and energy spectrum (of a few percent), due to the seasonal June–December variation in the relative Earth–halo velocity, and a forward-backward asymmetry in the direction of the nuclear recoil, due to the Earth's motion through the halo. The annual modulation signature has already been explored. Pioneering searches for WIMP annual modulation signals were carried out in Canfranc, Kamioka and Gran Sasso. At TAUP 97, the DAMA experiment at Gran Sasso, using a set of NaI scintillators, reported an annual modulation effect interpreted (after a second run) as due to a WIMP with a mass of 60 GeV and a scalar cross-section on protons of $`\sigma _\mathrm{p}=7\times 10^{-6}`$ picobarns. The implementation of these strategies will be illustrated by a selection of the experiments presented at TAUP 99. The characteristic features and main results of these experiments are overviewed in the following paragraphs.

## 3 Germanium Experiments with conventional detectors

The high radiopurity and low background achieved in Germanium detectors, their fairly low energy threshold, their reasonable Quenching Factor (25%) and other nuclear merits make Germanium a good option to search for WIMPs with detectors and techniques fully mastered. The first detectors applied to WIMP direct searches (as early as 1987) were, in fact, Ge diodes, as by-products of $`2\beta `$-decay dedicated experiments. The exclusion plots $`\sigma `$(m) obtained by former Ge experiments \[PNNL/USC/Zaragoza (TWIN and COSME-1), UCSB, CALT/NEU/PSI, H/M\] are still remarkable and were not surpassed until recently. There are three germanium experiments currently running for WIMP searches (COSME-2, IGEX and H/M). COSME-2 (Zaragoza/PNNL/USC) is a small (240 g) natural abundance germanium detector of low energy threshold (1.8–2 keV) and energy resolution of 400 eV at 10 keV. It has been underground for more than ten years and so is rather clean of the cosmogenically induced activity in the 8–12 keV region. It is currently taking data in Canfranc (at 2450 m.w.e.) for WIMP and solar axion searches (see I.G. Irastorza’s contribution to these Proceedings). The background is 0.6 (0.3) c/(keV kg day) averaged over the 2–15 keV (15–30 keV) energy region. IGEX is a set of enriched Ge-76 detectors looking for $`2\beta `$ decay (see D. Gonzalez’s contribution to these Proceedings) which have recently been upgraded for WIMP searches, with energy thresholds of $`\sim `$ 4 keV and energy resolution of 2 keV (at 10 keV). Data from one of these detectors (RG2, 2.1 kg $`\times 30`$ d) show backgrounds of 0.1 c/(keV kg day) in the 10–20 keV region and 0.04 c/(keV kg day) between 20 and 40 keV. It is remarkable that below 10 keV, down to the threshold of 4 keV, the background is $`\sim 0.3`$ c/(keV kg day), mainly from noise which is being removed. The spectrum is shown in Fig. 1. IGEX is also operating two other Ge detectors in Baksan \[one natural—TWIN—and the other enriched in Ge-76 (RV)\] of about 1 kg each, with thresholds of 2 and 6 keV respectively. After a subtraction procedure a background of $`\sim 0.1`$ c/(keV kg day) between 10 and 30 keV was obtained.
The Heidelberg/Moscow Ge experiment on $`2\beta `$ decay in Gran Sasso is also using data from one of its enriched Ge-76 detectors (2.7 kg, with a threshold of 9–10 keV) in a search for WIMPs. The background is similar to that of IGEX (0.16 c/(keV kg day) from 10–15 keV and 0.05 c/(keV kg day) from 15–30 keV), although its threshold is more than a factor of two higher. The low-energy spectrum is also shown, in comparison with that of IGEX, in Fig. 1. The exclusion plots obtained from the spectra of Germanium detectors are shown in Fig. 2. The Ge-combined limit is the contour obtained from the envelope of all of them—including the latest H/M data—and is compared with that derived from the latest COSME, IGEX and CDMS data presented at this Workshop. Also the most stringent NaI exclusion plot is shown, together with the ($`\sigma `$, m) region where a seasonal modulation effect in the recorded rate has been reported by the DAMA Collaboration and attributed to a WIMP signal. The exclusions depicted in this paper refer to spin-independent interactions. The sensitivity of the present detectors does not yet reach the rates needed to explore spin-dependent couplings. For comparison among different experiments, the coherent spin-independent WIMP-nucleus cross-section is normalized to that of the WIMP on nucleons. All the Ge and NaI exclusion plots shown have been recalculated from the original spectra by the author and his collaborators I.G. Irastorza and S. Scopel, with the same set of parameters. The values used for the halo model are $`\rho =0.3`$ GeV cm<sup>-3</sup>, $`v_{\mathrm{rms}}=270`$ km s<sup>-1</sup> and $`v_{\mathrm{esc}}=650`$ km s<sup>-1</sup>. There exist some projects with Ge detectors in different degrees of development: HDMS (Heidelberg DM Search) is a small (200 g) Ge detector placed in a well-type large (2 kg) Ge crystal. The goal is to reach a background of $`10^{-2}`$ c/(keV kg day). GEDEON (Germanium Diodes in One Cryostat, Zaragoza / USC / PNNL) will use a single cryostat of IGEX technology hosting a set of natural Ge crystals with a total mass of 28 kg. The threshold of each small detector is $`<2`$ keV and the background goal—expected from the measured radioimpurities—is $`10^{-2}`$–$`10^{-3}`$ c/(keV kg day). A set of three cryostats ($`\sim 80`$ kg of Germanium) is the planned final configuration which, embedded in Roman lead and graphite, will be installed in Canfranc. GENIUS (Germanium Detectors in Liquid Nitrogen in an Underground Set-up) plans to operate 40 natural abundance, naked germanium crystals (of 2.5 kg each) submerged directly in a tank of liquid nitrogen.
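Before turning to the NaI experiments, a brief numerical aside on the halo model quoted above: for a given candidate mass, the density $`\rho `$ fixes the local WIMP number density and flux. The sketch below is for orientation only, taking for definiteness the 60 GeV mass quoted earlier for the DAMA candidate.

```python
# Illustrative: local WIMP number density and flux implied by the halo
# parameters quoted above (rho = 0.3 GeV/cm^3, v_rms = 270 km/s).
RHO_GEV_CM3 = 0.3
V_CM_S = 270e5                         # 270 km/s in cm/s

def density_and_flux(m_wimp_gev):
    n = RHO_GEV_CM3 / m_wimp_gev       # WIMPs per cm^3
    return n, n * V_CM_S               # flux in WIMPs / cm^2 / s

n, flux = density_and_flux(60.0)       # 60 GeV taken from the DAMA fit
print(f"n = {n:.1e} cm^-3, flux = {flux:.1e} cm^-2 s^-1")
# -> n = 5.0e-03 cm^-3, flux = 1.4e+05 cm^-2 s^-1
```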
## 4 Sodium Iodide experiments

The full isotopic content of A-odd isotopes (Na-23, I-127) makes sodium iodide detectors sensitive also to spin-dependent WIMP interactions. The main recent interest in scintillators is due to the fact that large masses of NaI crystals for exploring the annual modulation are affordable. There are four NaI experiments running (UKDMC, DAMA, SACLAY and ELEGANTS V) and two in preparation (ANAIS and NAIAD). The NaI experiments serve to illustrate one of the strategies for background discrimination mentioned above. The time-shape differences between electron-recoil and nuclear-recoil pulses in NaI scintillators can be used to discriminate the gamma background from WIMPs (and neutrons), because of the shorter time constants of nuclear versus electron recoils. Templates of reference pulses produced by neutron, gamma (and X, $`\beta `$…) sources are compared with the data population pulses (in each energy band) by means of various parameters \[time constants (UKDMC), moments of the time distribution (DAMA, SACLAY), integrated time profile differences (SACLAY)\]. From this comparison, the fraction of data which could be due to nuclear recoils turns out to be only a few percent, depending on energy, of the measured background \[1 to 3 c/(keV kg day) in DAMA and UKDMC, and 2 to 10 c/(keV kg day) in SACLAY\], with different degrees of success depending on the experiment and (slightly) on the method used. Due to this drastic background reduction, the exclusion plots obtained from the stripped spectra have surpassed (DAMA, UKDMC) those derived from the (non-manipulated) spectra of the Ge detectors—whose radiopurity is much better than that of NaI. Let us briefly mention the main features of these experiments. The United Kingdom Dark Matter Collaboration (UKDMC) uses radiopure NaI crystals of various masses (2 to 10 kg) in various shielding conditions (water, lead, copper) in Boulby. Typical thresholds of 4 keV and backgrounds (before PSD) of 2–4 c/(keV kg day) (near threshold) have been obtained. Recent results from NaI crystals of 5 and 2 kg show a small population of pulses (Fig. 3) with an average time constant shorter than that of gamma events and close to that corresponding to neutron-induced recoils, which is not due to an instrumental artifact. (For recent results, see I. Liubarsky’s contribution to these Proceedings). Plans of the UKDMC include NAIAD (NaI Advanced Detector), consisting of 50–100 kg in a set of unencapsulated crystals, to avoid surface problems and improve light collection. The SACLAY group is carrying out a thorough investigation of the virtues and limitations of pulse shape analysis for statistical background discrimination. They use, at LSM Frejus, a radiopure 9.7 kg NaI crystal with an energy threshold of 2 keV and backgrounds (before PSD) of 8–10 c/(keV kg day) (at 2–3 keV) and of $`\sim 2`$ c/(keV kg day) (and flat) above 5 keV. The high background at threshold, not well understood, has spoiled the exclusion plots of this experiment compared with other NaI searches. On the other hand, their data—as happened with those of UKDMC—cannot be sharply split into Compton plus nuclear recoil events, showing instead a spurious population (Fig. 4), a fact that limits the sensitivity of the PSA they could perform. Including this peculiar situation as a systematic effect, their PSA background reduction is only 65% to 85%. The DAMA experiment also uses NaI crystals of 9.7 kg, with an energy threshold of $`\sim 2`$ keV. No spurious population which could spoil the PSA separation of background from nuclear recoils is found in DAMA. In fact, their background reduction reaches levels of 85% (4–6 keV) and 97% (12–20 keV), providing exclusion plots which have surpassed those of germanium. In conclusion, besides the significant reduction of background provided by this statistical method, a most intriguing result, as stated above, is that UKDMC and SACLAY, applying these PSD techniques to their data, have found that the data are compatible neither with a purely Compton contribution nor with nuclear recoil events, suggesting the existence of an unknown population or of systematic effects. It is also an intriguing coincidence that the energy spectra of those residual populations are very similar in both experiments (Fig. 5) (see G. Gerbier’s contribution to these Proceedings).
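The statistical nature of this PSD can be illustrated with a toy simulation: individual pulses are too noisy to classify one by one, but the first moment of the photon arrival times separates the two populations statistically. The pulse model and the time constants below are schematic assumptions, not the calibration of any of the experiments discussed.

```python
# Toy illustration of statistical pulse-shape discrimination in NaI.
# Pulses are modeled as single exponentials whose mean time constant is
# shorter for nuclear recoils than for electron recoils; the tau values
# are schematic assumptions, not real calibrations.
import numpy as np

rng = np.random.default_rng(0)

def simulate_mean_times(tau_ns, n_pulses, photons_per_pulse=40):
    """First moment <t> of the photon arrival times of each pulse;
    for an exponential pulse, <t> estimates the time constant tau."""
    t = rng.exponential(tau_ns, size=(n_pulses, photons_per_pulse))
    return t.mean(axis=1)

gammas = simulate_mean_times(tau_ns=230.0, n_pulses=5000)    # electron recoils
recoils = simulate_mean_times(tau_ns=150.0, n_pulses=5000)   # nuclear recoils

cut = 190.0   # ns, placed between the two populations
print(f"recoil acceptance {(recoils < cut).mean():.2f}, "
      f"gamma leakage {(gammas < cut).mean():.2f}")
# The overlap of the two distributions is why the separation is only
# statistical, not event-by-event.
```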
## 5 WIMPs Annual Modulation as a distinctive signature

The Earth’s revolution around the Sun, combined with the Sun’s velocity through the halo, makes the velocity of the detector with respect to the WIMP halo a yearly periodic function, peaked in June (maximum) and December (minimum). Such a seasonal variation should provide an annual modulation of the WIMP interaction rate and of the deposited energy, as a clear signature of the WIMP. However, the predicted modulation is only a few percent of the unmodulated, average signal. To reveal this rate modulation, large detector masses/exposures are needed. Such a seasonal modulation in the spectra has been reported in the DAMA experiment and attributed by the Collaboration to a WIMP signal (see P.L. Belli’s contribution to these Proceedings). Preceding the DAMA experiment, there have been various attempts to search for annual modulation of WIMP signals, starting as early as 1991. COSME-1 and NaI-32 (Canfranc) (with 2 years of statistics), DAMA-Xe (Gran Sasso), ELEGANTS-V (Kamioka), and more recently DEMOS (Sierra Grande) (with 3 years of statistics), are examples of seasonal modulation searches with null results. However, these experiments produced results which improved the $`\sigma `$(m) exclusion plots and settled the conditions and parameters for new, more sensitive searches. The DAMA experiment uses a set of 9 radiopure NaI crystals of 9.7 kg each, viewed by two low-background PMTs, one at each side, through light guides (10 cm long) coupled to the crystal. Special care has been taken in controlling the stability of the main experimental parameters. Noise removal is done by using the different timing behaviour of noise and true NaI pulses (with 40% efficiency at low energy). The background after noise removal is, averaging over the detectors: B (2–3 keV) $`\sim `$ 1–0.5 c/keV kg day and B (3–20 keV) $`\sim `$ 2 c/keV kg day. Notice the drop in the first two channels, precisely where the expected signal is more significant and, consequently, essential for deriving the reported modulation effect. (The Pulse Shape Analysis is not used in the annual modulation search.) The multidetector energy spectrum of single-hit events (each detector uses the remaining ones as an active veto) has been reported to this Workshop (Fig. 6) (see P.L. Belli’s contribution to these Proceedings). At the time of TAUP 99, DAMA had issued the results of two runs. Run 1, reported at TAUP 97, extended over 1 month in winter and 2 weeks in summer, i.e. a total of 4549 kg day. Run 2, which used a slightly different setup, extended from November to July (one detector for 90 days and eight detectors for 180 days), i.e. a total of 14962 kg day. The DAMA 1 and 2 results are compatible and consistent with each other. A likelihood method applied to the total statistics of 19511 kg day provides a minimum for: $`m=\left(59_{-14}^{+17}\right)`$ GeV, $`\sigma ^\mathrm{p}=\left(7.0_{-1.2}^{+0.4}\right)\times 10^{-6}`$ pb as the most likely values of the mass and interaction cross-section on protons of an “annually oscillating” WIMP (at 99.6% C.L.). Other statistical approaches essentially agree with the likelihood result. Fig. 7 shows the so-called DAMA region of the positive modulation effect and the scatter plot of the MSSM predictions of $`\sigma ^\mathrm{p}`$ as a function of m.
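The signal being searched for is conventionally parameterized as a cosine modulation of the rate around its mean, with a one-year period and maximum near June 2, when the Earth’s and the Sun’s velocities through the halo add. A minimal sketch follows; the values of $`S_0`$ and $`S_m`$ are illustrative, not DAMA’s fitted amplitudes.

```python
# Standard parameterization of the annual modulation signature:
# S(t) = S0 + Sm * cos(2*pi*(t - t0)/T), with T = 1 yr and t0 ~ June 2.
# S0 and Sm below are illustrative numbers, not DAMA's fitted values.
import numpy as np

T_DAYS = 365.25
T0_DAYS = 152.5            # ~June 2, measured from January 1

def rate(t_days, s0=1.0, sm=0.02):
    """Expected counting rate (arbitrary units) versus time; the
    modulated amplitude Sm is a few percent of the mean rate S0."""
    return s0 + sm * np.cos(2.0 * np.pi * (t_days - T0_DAYS) / T_DAYS)

print(rate(152.5))   # maximum, early June     -> 1.02
print(rate(335.1))   # minimum, early December -> 0.98
```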
The time evolution of the DAMA signal has recently been released by the Collaboration, indicating an oscillating trend in spite of the small exposure time and the discontinuity between the two runs (Fig. 8). The DAMA results have aroused great interest but also critical comments related to various aspects of the experiment. One of the most frequently heard concerns the drop of the background in the first energy bins, or the peculiar look of the residual background after subtracting the signal emerging from the likelihood analysis. Obviously the delivery of new data is eagerly awaited. Independently of other considerations and beyond any controversies whatsoever, it is imperative to confirm the DAMA results by other independent experiments with NaI (like ANAIS or ELEGANTS) and with other nuclear targets. The ZARAGOZA group is preparing ANAIS (Annual Modulation with NaIs), consisting of 10 NaI scintillators of 10.7 kg each, stored for more than ten years in Canfranc and recently upgraded for DM searches. It will be placed in Canfranc within a shielding of electroformed copper and a large box of Roman lead, plus a neutron screen and an active veto. The tests of a smaller set are underway. Expected performances are an energy threshold of 2 keV and a background at threshold of $`\sim `$ 2–3 c/(keV kg day). The OSAKA group is performing a search with the ELEGANTS V NaI detector in the new underground facility of Oto, with a huge mass of NaI scintillators (760 kg) upgraded from previous experiments. A search for annual modulation with a null result has been presented to this Workshop (see S. Yoshida’s contribution to these Proceedings). The DAMA $`\sigma `$(m) region should also be explored by the standard method of comparing theory with the total time-integrated experimental spectrum (without enquiring about possible variations in time). In fact, various experiments are reaching the DAMA region (below $`\sigma ^\mathrm{p}\sim 10^{-5}`$–$`10^{-6}`$ pb for WIMPs of 40–80 GeV), which is itself half-excluded by data from a previous DAMA-0 run using PSD discrimination. For instance, CDMS has reached the upper left corner and excludes it (see Fig. 2 and R. Gaitskell’s contribution to these Proceedings), whereas the IGEX and H/M germanium experiments are very close to it (with direct, non-stripped data). Due to what is at stake, it is important to know the prospects of WIMP detection through the annual modulation signature, in order to plan the right experiments. In fact, to find an unambiguous, reproducible and statistically significant modulation effect is, by now, the best identifying label of a WIMP. Sensitivity plots for modulation searches (presented to this Workshop) give the MT exposure needed to explore ($`\sigma `$, m) regions using the annual modulation signature or, equivalently, needed to detect an effect (should it exist) at a given C.L., due to a WIMP with a mass m and cross-section $`\sigma ^\mathrm{p}`$. Examples of sensitivity plots for Ge, NaI and TeO<sub>2</sub>—and the ensuing capability to explore ($`\sigma ^\mathrm{p}`$, m) regions—are given to illustrate the modulation research potential of some detectors, running or in preparation (see S. Scopel’s contribution to these Proceedings).

## 6 Cryogenic Particle Detectors

In the WIMP scattering on matter, only a small fraction of the energy delivered by the WIMP goes to ionization, the main part being released as heat.
Consequently, thermal detectors should be suitable devices for dark matter searches, with quenching factors of about unity and the low effective energy threshold ($`\mathrm{E}_{\mathrm{vis}}\simeq \mathrm{T}`$) that WIMP searches require. Moreover, bolometers which also collect charge (or light) can simultaneously measure the phonon and ionization (or scintillation) components of the energy deposition, providing a unique tool for background subtraction and particle identification. There are five experiments searching for direct interactions of WIMPs with nuclei based on thermal detection currently running (MIBETA, CDMS, EDELWEISS, CRESST, and ROSEBUD), another one, CUORICINO, being mounted, and a big project, CUORE, in preparation. The CRESST (Cryogenic Rare Event Search with Superconducting Thermometers) (MPI Munich / TUM Garching / Oxford, Gran Sasso) detectors are four sapphire crystals (Al<sub>2</sub>O<sub>3</sub>) of 262 g each with a tungsten superconducting transition edge sensor. The energy resolution and threshold obtained with an x-ray fluorescence source are respectively 133 eV (at 1.5 keV) and 500 eV. The background obtained in the Gran Sasso running is about 10–15 counts/(keV kg day) above 30 keV, going down to 1 c/(keV kg day) above 100 keV, whereas below 30 keV the spectrum is largely dominated by noise and other spurious sources, preventing the derivation of exclusion plots. Recently the collaboration has performed simultaneous measurements of scintillation and heat, with a 6 g CaWO<sub>4</sub> crystal as absorber. The preliminary results indicate a rejection of electron recoil events with an efficiency greater than 99.7% for nuclear recoil energies above 15 keV. Short term prospects for CRESST are the implementation of the scintillation-phonon discrimination of nuclear recoils in a CaWO<sub>4</sub> detector of 1 kg (see J. Jochum’s contribution to these Proceedings). ROSEBUD (Rare Objects Search with Bolometers Underground) \[University of Zaragoza and IAS (Orsay)\] is another sapphire bolometer experiment, aiming to explore the low energy (300 eV–10 keV) nuclear recoils produced by low mass WIMPs. It is currently running in Canfranc (at 2450 m.w.e.). It consists of two 25 g and one 50 g selected sapphire bolometers (with NTD (Ge) thermistors) operating inside a small dilution refrigerator at 20 mK. One of the 25 g sapphire crystals is part of a composite bolometer (2 g of LiF enriched at 96% in <sup>6</sup>Li glued to it) to monitor the neutron background of the laboratory. The inner (cold) shielding and the external one are made of archaeological lead of very low contamination. The experimental setup is installed within a Faraday cage and an acoustic isolation cabin, supported by an antivibration platform. Power supply inside the cabin is provided by batteries, and data transmission from the cabin, through convenient filters, is based on optical fibers. Infrared (IR) pulses are periodically sent to the bolometers through optical fibers in order to monitor the stability of the experiment. Pumps have vibration-decoupled connections. The first tests in Canfranc have shown that the microphonic and electronic noise level is quite good, about 2 nV/Hz<sup>1/2</sup> below 50 Hz. The bolometers were previously tested in Paris (IAS), showing a threshold of 300 eV and an energy resolution of 120 eV (at 1.5 keV). Typical sensitivities obtained (in Canfranc) are in the range of 0.3–1 $`\mu `$V/keV.
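The $`\mu `$V/keV sensitivities quoted above rest on the tiny heat capacity of dielectric crystals at mK temperatures: the temperature rise per event is $`\mathrm{\Delta }T=E/C(T)`$, with $`C`$ following the Debye law $`C\propto (T/\mathrm{\Theta }_D)^3`$. The sketch below evaluates this for a 25 g sapphire absorber at 20 mK; the Debye temperature and the 10 keV deposit are textbook-level assumptions for orientation, not ROSEBUD's measured parameters.

```python
# Illustrative estimate of a sapphire bolometer's temperature rise:
# Delta T = E / C(T), with the Debye heat capacity
# C = 1944 * (T / Theta_D)^3 J per (mole of atoms * K).
# Theta_D ~ 1040 K for Al2O3, the 25 g absorber and the 10 keV deposit
# are textbook-level assumptions for orientation only.
KEV_TO_J = 1.602e-16
THETA_D = 1040.0          # Debye temperature of sapphire, K (approx.)
MOLAR_MASS = 102.0        # g/mol of Al2O3
ATOMS_PER_UNIT = 5        # 2 Al + 3 O

def delta_t_kelvin(e_kev, mass_g, t_kelvin):
    mol_atoms = mass_g / MOLAR_MASS * ATOMS_PER_UNIT
    c = 1944.0 * (t_kelvin / THETA_D) ** 3 * mol_atoms   # heat capacity, J/K
    return e_kev * KEV_TO_J / c

print(f"{delta_t_kelvin(10.0, 25.0, 0.020):.1e} K")
# -> ~1e-4 K per 10 keV: the vanishing Debye heat capacity at 20 mK is
# what allows such low effective thresholds.
```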
Overall resolutions of 3.2 and 6.5 keV FWHM were typically obtained in Canfranc with the 50 g and 25 g bolometers, respectively, at 122 keV. Low energy background pulses corresponding to energies below 5 keV are seen. In the test runs, the background obtained was as large as 120 counts/keV/kg/day around 40–80 keV. After various modifications of the cryostat components, the background level of the 50 g bolometer stands at about 15 counts/keV/kg/day from 20 to 80 keV. This progressive reduction is illustrated in Fig. 9. Measurements of the radiopurity of individual components continue with an ultralow background Ge detector at Canfranc, and the offending components are removed when needed, with the purpose of lowering the background by one more order of magnitude. The next step of the ROSEBUD program will deal with bolometers of sapphire and germanium, operating together to investigate the target dependence of the WIMP rate (see P. de Marcillac’s contribution to these Proceedings). The Cryogenic Dark Matter Search Collaboration (CDMS) (CfPA / UC Berkeley / LLNL / UCSB / Stanford / LBNL / Baksan / Santa Clara / Case Western / Fermilab / San Francisco State) has developed bolometers which also collect electron-hole carriers for discriminating nuclear recoils from electron recoils. The electron-hole pairs are efficiently collected in the bulk of the detector, but the trapping sites near the detector surface produce a layer ($`10`$–$`20\mu \mathrm{m}`$) of poor charge collection, where surface electrons from outside suffer ionization losses and fake nuclear recoils. Two types of phonon readout have been developed. In the BLIP (Berkeley Large Ionization and Phonon) detector, an NTD Ge thermistor reads the thermal phonons in milliseconds. In the FLIP (Fast Large Ionization and Phonon) detectors, non-equilibrium, athermal phonons are detected (on a microsecond time scale) with superconducting transition edge thermometers in tungsten. Current prototypes are BLIPs of 165 g of Germanium and FLIPs of 100 g of Silicon. The FWHM energy resolutions are 900 eV and 450 eV respectively in the phonon and charge channels of the BLIP detectors, and about 1 keV in the FLIPs. The electron/nuclear recoil rejection in both detectors is larger than 99% above 20 keV recoil energy. Backgrounds below 0.1 c/(keV kg day) in the 10–20 keV energy region have been obtained in recent runs. Following new developments in the detectors, the surface events have been successfully discriminated using their phonon rise time: the low-charge-collection events (surface electrons) have been shown to have faster phonon rise times than the bulk events. A rise time cut is applied to get rid of them. Results from a recent run are depicted in Fig. 10. In spite of the small masses and short runs, the CDMS exclusion plots are competitive with those from much larger exposures of other detectors. In fact, CDMS is now probing the DAMA region, as reported to this Workshop (see Fig. 2 and R. Gaitskell’s contribution to these Proceedings). Projects of the CDMS Collaboration include the transfer of the FLIP technology to germanium crystals of 250 g. The planned exposure at Stanford (only 17 m of overburden) is 100 kg day, with a background goal of $`\mathrm{B}=0.01`$ c/keV kg day. The experiment will be moved to Soudan during the year 2000, with twenty FLIP detectors of Germanium (250 g each) and a background goal of $`\mathrm{B}=\mathrm{few}\times 10^{-4}`$ c/keV kg day.
EDELWEISS (Orsay/Lyon/Saclay/LSM/IAP) has operated two 70 g HP Ge bolometers in the Frejus tunnel with heat-ionization discrimination, obtaining results similar to those of the BLIP detectors of CDMS, which therefore will not be repeated here. The background obtained is $`\mathrm{B}=0.6`$ c/keV kg day in the $`12`$–$`70`$ keV region of the recoil energy spectrum. The rejection is 98% for surface events and $`>99.7`$% for internal events. (For more details, see G. Chardin’s contribution to these Proceedings). The collaboration is preparing a small tower of three Ge bolometers of 70 g each, to be enlarged with another three of 320 g each. The background goal is to reach $`\mathrm{B}=10^{-2}`$ c/keV kg day, which seems to be at hand. A second phase of EDELWEISS (2000–2001) will use a reverse dilution refrigerator of 100 liters, now under construction, to host $`50`$–$`100`$ detectors. Twenty Ge detectors of 300 g will be installed in the next two years, with the expectation of improving the rejection up to 99.99% and reaching a background of $`10^{-4}`$ c/(keV kg day). CUORE (Cryogenic Underground Observatory for Rare Events) (Berkeley / Florence / LNGS / Leiden / Milan / Neuchatel / South Carolina / Zaragoza) is a project to construct a large mass (775 kg) modular detector consisting of 1020 single bolometers of TeO<sub>2</sub>, of dimensions $`5\times 5\times 5\mathrm{cm}^3`$ and 760 g each, with glued NTD Ge thermistors, to be operated at 7 mK in the Gran Sasso Laboratory. A tower of 14 planes consisting of 56 of those crystals, with a total mass of 42 kg—the so-called CUORICINO detector—will be a first step in the CUORE project. Preliminary results of a 20-crystal array of tellurite bolometers (340 g each) (the MIBETA experiment), optimized for $`2\beta `$ decay searches, show energy thresholds ranging from 2 to 8 keV (depending on the detector) and background levels of a few counts per keV kg day in the 15–40 keV low energy region. CUORICINO is planned as an extension of the MIBETA setup featuring more and larger crystals. The sum of the 20 simultaneous calibration spectra taken with a single <sup>232</sup>Th source shows that the array is indeed acting as a single detector. Four bolometers of the future CUORICINO array have been recently tested in Gran Sasso. The results on the energy resolution in the region of the neutrinoless double beta decay of <sup>130</sup>Te (2500 keV) are about $`5`$–$`8`$ keV (see M. Pavan’s contribution to these Proceedings). Other values obtained are $`\sim 2`$ keV at 46 keV and 4.2 keV at 5400 keV. Energy resolutions of 1–2 keV and backgrounds of $`10^{-2}`$ c/(keV kg day) in the few keV region can be expected. Fig. 11 shows the exclusion contours obtained from running experiments (Ge, NaI), and the projections for GEDEON, CUORICINO and CDMS, assuming the parameter values expected for these experiments.

## 7 Where we stand and where we go

Unraveling the nature of the dark matter is of utmost importance in Cosmology, Astrophysics and Particle Physics. It has triggered a large experimental activity in searching for all its possible forms, either conventional or exotic. In particular, there exist various large microlensing surveys looking for dark baryons (EROS, MACHO, OGLE…) and a variety of observations searching for the baryonic component of dark matter (see J. Usón’s contribution to these Proceedings). As far as the exotic, non-baryonic objects are concerned, a few experiments are looking (or plan to look) for axions (RBF/UF, LIVERMORE, KYOTO, CRYSTALS, CERN Solar Axion Telescope Antenna…), as reviewed in P.
Sikivie’s contribution to these Proceedings. In the WIMP sector (see L. Roszkowski’s contribution to these Proceedings), to which this experimental overview is dedicated, there are various large underground detector experiments (MACRO, BAKSAN, SOUDAN, SUPER-K…) and deep underwater (or ice) neutrino telescopes (AMANDA, BAYKAL, ANTARES, NESTOR…) looking for (or planning to look for) neutrino signals originating from the annihilation of WIMPs, as well as some balloon and satellite experiments looking for antimatter of WIMP origin, most of them included in these Proceedings. About thirty experiments, either running or being prepared, are looking for WIMPs by direct detection (COSME, IGEX, HEIDELBERG/MOSCOW, ELEGANTS-V and VI, DAMA, SACLAY, UKDMC, ANAIS, TWO-PHASE Xe, LqXe, CASPAR, SIMPLE, MICA, DRIFT, CRESST, ROSEBUD, MIBETA, CUORICINO, CDMS, EDELWEISS, ORPHEUS…), with conventional as well as with cryogenic techniques… and some large projects with 100 to 1000 detectors (CUORE, GEDEON, GENIUS…) are being initiated. Their current achievements and the projections of some of them have been shown in terms of exclusion plots $`\sigma ^\mathrm{p}`$(m), which illustrate the potential to investigate the possible existence of WIMP dark matter in regions quite close to where the supersymmetric candidates are expected to appear. After witnessing the large activity and progress reported to this Workshop, it is clear that the main strategies recommended for WIMP searches have proved to be quite efficient in reducing the window of the possible particle dark matter and in approaching the zone of the more appealing candidates and couplings. Examples of the achievements in radiopurity, in background identification or rejection, in low (effective) threshold energy and efficiency, as well as in investigating the genuine signatures of WIMPs, like modulation and directionality, have been extensively reported to TAUP 99 and reviewed selectively in this paper. The conclusion is that these strategies are well focused and should be further pursued. Finally, an annual modulation effect—supposedly produced by a WIMP—is there, alive since the last TAUP 97, waiting to be confirmed by independent experiments.

## Acknowledgements

I wish to thank the spokespersons of the experiments presented at TAUP 99 for making their contributions available to me in advance, as well as other useful information about the status and plans of their experiments. I am indebted to my collaborators I.G. Irastorza and S. Scopel for their contribution to the making of the exclusion plots. The kindness of my collaborators of COSME, IGEX and ROSEBUD in allowing me to use the data from these experiments is warmly acknowledged. Thanks are also due to my CUORICINO colleagues for their permission to use internal information on the status and preliminary results of the experiment. Finally, I thank Mercedes Fatás for her patience and skill in the composition of the text. The financial support of CICYT (Spain) under grant AEN99-1033 and of the European Commission (DGXII) under contract ERB-FMRX-CT-98-0167 is duly acknowledged.
no-problem/9912/astro-ph9912396.html
ar5iv
text
# THE EVOLUTIONARY STATUS OF SS433

## 1 INTRODUCTION

The nature of the unusual binary system SS 433 has been an interesting question ever since the recognition (Abell & Margon, 1979; Milgrom, 1979; Fabian & Rees, 1979) that the system drives precessing jets with velocities $`0.26c`$. In particular, although the binary period $`P=13.1`$ d has long been known (Crampton & Hutchings, 1981), there is no consensus about the component masses, since all radial–velocity studies have to use emission lines, and one cannot be sure that the measured velocities are those of either star. The heavy extinction towards the object makes estimates of the companion’s spectral type and luminosity difficult; dereddening with the usual values $`A_V\sim 7`$, based on comparison of infrared and H$`\alpha `$ intensities (e.g. Giles et al., 1979), suggests the presence of a Rayleigh–Jeans continuum in the optical, and thus possibly an early–type companion. Estimating the mass ratio from the duration of the observed X–ray eclipse (Kawai et al., 1989) is also difficult, as we know that the X–rays come from the moving jets (Watson et al., 1986) and may therefore be extended. The assumptions that the jets are partly obscured by the accretion disc and by the companion during eclipse lead (D’Odorico et al., 1991) to estimates $`q=M_2/M_1\gtrsim 4`$, where $`M_2,M_1`$ are the masses of the companion and compact star respectively, consistent with the presence of an early–type companion. Observational hints that SS 433’s companion is relatively massive ($`q\gtrsim 1`$) have, until recently, presented a dilemma to theorists. A large mass ratio would put SS 433 in a state often invoked in binary evolution scenarios. This situation tends to lead to high mass transfer rates, with a large fraction of the donor’s mass being transferred on its thermal timescale if its envelope is predominantly radiative, and even more rapidly if the envelope is convective. The resulting short mass transfer lifetime for a fairly massive donor (i.e., $`q\gtrsim 1`$) offers a simple explanation for the uniqueness of SS 433. Furthermore, a high mass transfer rate is indicated by the mass loss rate $`\dot{M}_{\mathrm{jet}}\sim 10^{-6}\mathrm{M}_{\odot }`$ yr<sup>-1</sup> in the precessing jets (Begelman et al., 1980). However, this is also a potential problem: if the companion is indeed relatively massive, even thermal–timescale mass transfer leads to rates far in excess of the Eddington limit for an accretor of a few $`\mathrm{M}_{\odot }`$, in excess even of the inferred mass-loss rates in the jets. Conventional wisdom has until recently suggested that such rates inevitably lead very quickly to common–envelope (CE) evolution, in which the compact object can neither accrete nor expel the transferred matter rapidly enough to prevent the formation of an envelope around the entire system. This would then probably appear as a giant, and certainly not be recognizable as an accreting binary. Since SS 433 is not yet in such a state, the predicted lifetime for its current state would become embarrassingly short, requiring very high space densities of similar systems. The problem is only slightly eased by abandoning the assumption $`q\gtrsim 1`$, since the mean density of the companion is essentially fixed by the requirement that it should fill its Roche lobe in a binary with a period of 13.1 d (mass transfer through stellar wind capture is very unlikely to give the high mass transfer rates inferred).
A companion star in the process of crossing the Hertzsprung gap is the only likely possibility, as a main–sequence companion would have to be improbably massive (see eqs 4, 5 below), while nuclear–timescale mass transfer from a giant companion would give far too low a transfer rate. This then leads back to mass transfer on something like the thermal time of the expanding companion star; while this is somewhat milder than in the case $`q\gtrsim 1`$, it would still be well above the values hitherto thought likely to produce a common envelope. Recent work on the neutron–star X–ray binary Cygnus X–2 (King & Ritter, 1999; Podsiadlowski & Rappaport, 1999) offers a way out of this dilemma: it is evident that this system has survived an episode of thermal–timescale mass transfer resulting from an initial mass ratio $`q_i\gtrsim 1`$ without entering CE evolution. The aim of this paper is to investigate whether this is possible in SS 433 also, and thus to discover its likely evolutionary status.

## 2 AVOIDANCE OF COMMON ENVELOPE EVOLUTION

King & Ritter (1999) show that the progenitor of Cygnus X–2 must have transferred a mass $`\sim 3\mathrm{M}_{\odot }`$ at rates $`\dot{M}_{\mathrm{tr}}\sim 10^{-6}\mathrm{M}_{\odot }`$ yr<sup>-1</sup> (the Case AB evolution suggested by Podsiadlowski & Rappaport, 1999 leads to a similar requirement). This greatly exceeds the Eddington rate $`\dot{M}_{\mathrm{Edd}}=L_{\mathrm{Edd}}/c^2`$ for a $`1.4\mathrm{M}_{\odot }`$ neutron star. This star evidently accreted only a tiny fraction of the transferred mass, expelling the rest from the system entirely. The obvious agent for this is radiation pressure. King & Begelman (1999) (hereafter KB99) suggest that expulsion occurs from the ‘trapping’ radius
$$R_{\mathrm{ex}}\simeq \left(\frac{\dot{M}_{\mathrm{tr}}}{\dot{M}_{\mathrm{Edd}}}\right)R_S\simeq 1.3\times 10^{14}\dot{m}_{\mathrm{tr}}\mathrm{cm},$$ (1)
where $`R_S`$ is the Schwarzschild radius and $`\dot{m}_{\mathrm{tr}}`$ is the transfer rate expressed in $`\mathrm{M}_{\odot }`$ yr<sup>-1</sup>. (Note that $`R_{\mathrm{ex}}`$ is independent of $`M_1`$ since both $`\dot{M}_{\mathrm{Edd}}`$ and $`R_S`$ scale as $`M_1`$.) Within this radius advection drags photons inward, overcoming their outward diffusion through the matter. If the matter has even a small amount of angular momentum, most of it is likely to be blown away as a strong wind from $`R_{\mathrm{ex}}`$: the gravitational energy released by accretion at $`\dot{M}_{\mathrm{Edd}}`$ deep in the potential of the compact star is used to expel the remainder ($`\dot{M}_{\mathrm{tr}}-\dot{M}_{\mathrm{Edd}}`$) of the infall, which is only weakly bound at distances $`\sim R_{\mathrm{ex}}`$ (cf Blandford & Begelman 1999). KB99 suggest that CE evolution is avoided provided that $`R_{\mathrm{ex}}`$ is smaller than the accretor’s Roche lobe radius $`R_1`$. If the accretor is the less massive star this is given approximately by
$$R_1=1.3\times 10^{11}m_1^{1/3}P_\mathrm{d}^{2/3}\mathrm{cm},$$ (2)
where $`m_1=M_1/\mathrm{M}_{\odot }`$ and $`P_\mathrm{d}`$ is the binary period in days. Since this is related to the Roche lobe radius $`R_2`$ of the companion star via
$$\frac{R_2}{R_1}\simeq \left(\frac{M_2}{M_1}\right)^{0.45},$$ (3)
(cf King et al., 1997) KB99 were able to determine whether Roche–lobe overflow from various types of companion star would lead to CE evolution or not. They concluded that CE evolution was unlikely for mass transfer from any main–sequence or Hertzsprung gap star with $`q\gtrsim 1`$, provided that its envelope was largely radiative.
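A quick numerical reading of eqs. (1)–(2) may help: the sketch below evaluates the expulsion radius against the accretor's Roche lobe to show how large $`\dot{M}_{\mathrm{tr}}`$ can become before $`R_{\mathrm{ex}}`$ exceeds $`R_1`$. The accretor mass and the transfer rates used are illustrative assumptions, not fitted values from the paper.

```python
# Numerical reading of eqs. (1)-(2): expulsion ("trapping") radius vs.
# the accretor's Roche lobe. The masses and transfer rates below are
# illustrative assumptions, not fitted values from the paper.
def r_ex_cm(mdot_tr_msun_yr):
    """Eq. (1): R_ex ~ 1.3e14 * mdot_tr cm (mdot in Msun/yr)."""
    return 1.3e14 * mdot_tr_msun_yr

def r_lobe_cm(m1_msun, p_days):
    """Eq. (2): Roche lobe of the (less massive) accretor."""
    return 1.3e11 * m1_msun ** (1.0 / 3.0) * p_days ** (2.0 / 3.0)

p, m1 = 13.1, 1.4                      # SS 433 period; accretor mass assumed
r1 = r_lobe_cm(m1, p)
for mdot in (1e-6, 1e-4, 1e-2):
    print(f"mdot={mdot:.0e}: R_ex={r_ex_cm(mdot):.2e} cm, "
          f"R_1={r1:.2e} cm, CE avoided: {r_ex_cm(mdot) < r1}")
# Even mdot ~ 1e-4 Msun/yr keeps R_ex well inside R_1 at P = 13.1 d.
```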
In the next section we investigate possible companion stars in SS 433.

## 3 THE EVOLUTION OF SS 433

If the companion star in SS 433 is more massive than the compact accretor ($`q>1`$) and fills its Roche lobe, then from (2) and (3) it has radius
$$R_2=10\mathrm{R}_{\odot }m_1^{-0.12}m_2^{0.45}\left(\frac{P_\mathrm{d}}{13.1}\right)^{2/3}.$$ (4)
In the opposite case ($`q\lesssim 1`$) we have instead
$$R_2=10\mathrm{R}_{\odot }m_2^{0.33}\left(\frac{P_\mathrm{d}}{13.1}\right)^{2/3}.$$ (5)
In either case this is obviously a fairly extended star, and clearly well above the upper main sequence mass–radius relation for any realistic mass $`M_2=m_2\mathrm{M}_{\odot }`$. By the reasoning of the previous section, the only possible companions are stars which came into contact with the Roche lobe as they crossed the Hertzsprung gap. Since such stars are by definition out of thermal equilibrium, we cannot assume that their structure is given by that of a single star of the same instantaneous mass (although this is approximately true for companions of modest mass $`m_2\lesssim 3.5`$ with $`q\lesssim 1`$; Kolb, 1998). Instead we must follow their evolution under mass loss explicitly. The lack of clear dynamical mass information means that there is considerable freedom in trying to fit the current state of SS 433, and one cannot expect to find a unique assignment. We restricted the evolutions we considered to those satisfying the following list of conditions at the current epoch:

1. $`P_\mathrm{d}\simeq 13.1`$
2. Mass transfer rate $`\dot{M}_{\mathrm{tr}}>\dot{M}_{\mathrm{jet}}\sim 10^{-6}\mathrm{M}_{\odot }`$ yr<sup>-1</sup>
3. $`R_{\mathrm{ex}}<R_1`$
4. The companion does not have a deep convective envelope (which would make CE evolution inevitable). In practice this means that the stellar effective temperature should typically exceed a value $`\sim 6000`$ K, which in turn requires an initial companion mass $`M_{2i}\gtrsim 4\mathrm{M}_{\odot }`$
5. The time since mass transfer exceeded the Eddington limit is $`t_0\gtrsim 10^3`$ yr.

Condition 5 comes from observations of the surrounding W50 nebula. If this is attributed to interaction with the jets one finds $`t_0\gtrsim 10^3`$ yr, assuming that the jets were produced promptly (Begelman et al., 1980; Königl, 1983). We calculated evolutionary models satisfying these conditions using the code developed by Eggleton (1971, 1972), with the Roche lobe radius given (more accurately than by eqs 4, 5 above) by
$$\frac{R_2}{a}=\frac{0.49q^{2/3}}{0.6q^{2/3}+\mathrm{ln}(1+q^{1/3})},$$ (6)
with $`a`$ the binary separation, and a mass transfer formulation as described in Ritter (1983). In all cases the mass transfer rate is highly super–Eddington; we assume that the transferred mass is lost from the binary with the specific angular momentum of the compact accretor. From eq. 5 of King & Ritter (1999) it is easy to show that the orbital period decreases for mass ratios $`q>q_{\mathrm{crit}}\simeq 1.39`$, and increases for smaller $`q`$. Because higher mass stars drive higher rates of mass transfer, the companion star mass is limited to $`\lesssim 12\mathrm{M}_{\odot }`$, for otherwise the birth rate requirements for systems similar to SS 433 would become severe (see below). Table 1 shows our results for systems characterized by an initial orbital period of 13.1 d.

## 4 DISCUSSION

Table 1 provides only a coarse sampling of the parameter space of possible evolutionary models for SS 433. However we can already make some interesting statements. For a given initial mass ratio $`q_i=M_{2i}/M_1`$ (e.g.
sequences 2 and 4) higher masses imply higher mass loss rates, and hence that more mass is lost after a given time ($`10^3`$ yr). We can understand this as a consequence of the shorter thermal timescale for higher–mass stars. The orbital period decrease is greater for higher masses as a result. (Note that sequence 1 has $`q_i`$ very close to $`q_{\mathrm{crit}}`$.) For a given initial companion mass $`M_{2i}`$ (compare sequences 1–3), decreasing the compact object mass produces the same trends, because it amounts to increasing the mass ratio further above $`q_{\mathrm{crit}}`$. For a neutron–star accretor (sequence 5) the larger mass ratio wins out over the smaller companion mass, again producing the same trends. This implies that a neutron–star accretor provides a good fit to the present state of SS 433 only if the companion has quite a low mass ($`4`$–$`5\mathrm{M}_{\odot }`$) in a fairly narrow range: masses much lower than this imply large convective mass fractions at or soon after the start of mass transfer, and thus CE evolution. This results in a dynamical instability or a delayed dynamical instability, as discussed by Webbink (1977), Hjellming & Webbink (1987), and Hjellming (1989). For companion stars more massive than about $`5\mathrm{M}_{\odot }`$ with a neutron–star accretor, binary evolution leads to a decrease of the orbital separation and period even after 1000 yrs, excluding such systems as candidate progenitor systems for SS 433. For longer initial periods, the companion is likely to have a convective envelope at the onset of mass transfer, and thus enter a CE stage. Depending on the component masses, mass transfer rates $`\dot{M}_{\mathrm{tr}}\simeq 7\times 10^{-6}`$–$`4\times 10^{-4}\mathrm{M}_{\odot }`$ yr<sup>-1</sup> are typical for a system with a period of 13.1 d, and can evidently be ejected without causing the onset of CE evolution provided that the companion is predominantly radiative. The resulting mass transfer lifetimes are in the range $`10^4`$–$`10^5`$ yr, given largely by the thermal timescale of the companion. They cannot be much greater than this, since this would require companion masses $`\lesssim 4\mathrm{M}_{\odot }`$, which are subject to a dynamical instability. The birthrate requirement for systems like SS 433 is thus of order $`10^{-4}`$–$`10^{-5}`$ yr<sup>-1</sup> in the Galaxy. The predicted mass transfer rates $`\dot{M}_{\mathrm{tr}}\simeq 7\times 10^{-6}`$–$`4\times 10^{-4}\mathrm{M}_{\odot }`$ yr<sup>-1</sup> are in all cases much larger than the likely mass–loss rate $`\dot{M}_{\mathrm{jet}}\sim 10^{-6}\mathrm{M}_{\odot }`$ yr<sup>-1</sup> in the precessing jets (Begelman et al., 1980). We expect that the jets are ejected from a region no larger than $`R_{\mathrm{jet}}\simeq (\dot{M}_{\mathrm{jet}}/\dot{M}_{\mathrm{tr}})R_{\mathrm{ex}}\simeq 1\times 10^8`$ cm; their quasi-relativistic velocity suggests that they emerge from a smaller radius, i.e., a few times the neutron–star radius $`10^6`$ cm or the black–hole Schwarzschild radius $`3\times 10^5m_1`$ cm. The jets constitute just the innermost part of the mass expulsion from the accretion flow: almost all of the transferred mass is lost from larger radii, $`\sim R_{\mathrm{ex}}`$. In addition to accounting for the very low observed X–ray luminosity (which probably comes entirely from the jets), this expulsion of matter is presumably the source of the ‘stationary’ H$`\alpha `$ line and the associated free–free continuum seen in the near infrared (Giles et al., 1979).
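Note that since $`R_{\mathrm{ex}}\propto \dot{M}_{\mathrm{tr}}`$ (eq. 1), the jet-launching scale $`R_{\mathrm{jet}}\simeq (\dot{M}_{\mathrm{jet}}/\dot{M}_{\mathrm{tr}})R_{\mathrm{ex}}`$ depends only on $`\dot{M}_{\mathrm{jet}}`$. The sketch below makes this cancellation explicit; the transfer rates are the illustrative range quoted above.

```python
# Check of the jet-launching scale: combining R_jet ~ (Mdot_jet/Mdot_tr)*R_ex
# with eq. (1), R_ex ~ 1.3e14 * mdot_tr cm, the transfer rate cancels and
# R_jet ~ 1.3e14 * mdot_jet cm. Rates are illustrative (in Msun/yr).
def r_jet_cm(mdot_jet, mdot_tr):
    r_ex = 1.3e14 * mdot_tr              # eq. (1)
    return (mdot_jet / mdot_tr) * r_ex   # = 1.3e14 * mdot_jet

for mdot_tr in (7e-6, 1e-4, 4e-4):       # spans the rates found above
    print(f"mdot_tr={mdot_tr:.0e}: R_jet={r_jet_cm(1e-6, mdot_tr):.1e} cm")
# -> ~1.3e8 cm in every case, as stated in the text.
```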
The emission measure $`VN_e^2\sim 10^{61}`$ cm<sup>-3</sup> of the latter is consistent with this, as the likely radius $`R\sim 10^{15}`$ cm of this region (Begelman et al., 1980) implies $`N_e\sim 10^8`$ cm<sup>-3</sup>, and thus outflow rates as high as
$$\dot{M}_{\mathrm{out}}\simeq 4\pi R^2vN_em_H\simeq 2.8\times 10^{-3}\mathrm{M}_{\odot }\mathrm{yr}^{-1},$$ (7)
where $`v\sim 1000`$ km s<sup>-1</sup> is the velocity width of the H$`\alpha `$ line. The very high mass transfer rates encountered in these calculations make it difficult to follow them to their natural endpoints, as assumptions such as synchronous rotation of the donor begin to break down. However, the main outlines of the future evolution of these systems are fairly clear, provided that the system does not enter a CE phase as a result of a delayed transition from thermal to dynamical timescale mass transfer. (This delayed dynamical instability is avoided in higher mass systems with mass ratio close to unity, again tending to favor black–hole systems. Sequence 5 is likely to encounter this instability, cf King & Ritter, 1999.) An initial mass ratio $`q\gtrsim 1`$ implies that the Roche lobe will shrink before the mass ratio reverses, and thus that the current mass transfer phase is likely to end with a fairly tight system consisting of the black hole or neutron star accretor (mass effectively unchanged) and the helium core of the donor (cf King & Ritter, 1999). For $`M_2<12\mathrm{M}_{\odot }`$ at the onset of mass transfer the core has mass $`M_{\mathrm{He}}\lesssim 2\mathrm{M}_{\odot }`$. This star will re–expand through helium shell–burning, and will probably initiate a further short mass transfer phase (so–called Case BB; Delgado & Thomas, 1981; Law & Ritter, 1983), depending on the binary separation. The donor will end its life as a CO white dwarf. The binary may be tight enough for coalescence to occur because of gravitational radiation losses within $`10^{10}`$ yr. Systems of this type would therefore be good candidates for gamma–ray burst sources, although detailed evolutionary calculations to check the scenario sketched here are clearly required before we can make this statement with any confidence. In particular, it would be premature to translate the predicted birthrates of $`10^{-4}`$–$`10^{-5}`$ yr<sup>-1</sup> into a predicted gamma–ray burst rate. This research was begun at the Institute for Theoretical Physics and supported in part by the National Science Foundation under Grant No. PHY94–07194. ARK gratefully acknowledges support by the UK Particle Physics and Astronomy Research Council through a Senior Fellowship. RT acknowledges support from NSF grant AST97–27875. MCB acknowledges support from NSF grants AST95–29170, AST98–76887 and a Guggenheim Fellowship.
no-problem/9912/astro-ph9912016.html
ar5iv
text
# The ASCA Hard Serendipitous Survey (HSS): a Progress Update

## 1. Introduction

At the Osservatorio Astronomico di Brera we started, a few years ago, the ASCA Hard Serendipitous Survey (HSS): a systematic search for sources in the $`2`$–$`10`$ keV energy band, using data from the GIS2 instrument onboard the ASCA satellite. The specific aims of this project are: a) to extend to faint fluxes the census of the X-ray sources shining in the hard X-ray sky, b) to evaluate the contribution to the Cosmic X-ray Background (CXB) from the different classes of X-ray sources, and c) to test the Unification Model for AGNs. This effort has led to a pilot sample of 60 sources that has been used to extend the description of the number-counts relationship down to a flux limit of $`6\times 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (the faintest detectable flux), resolving directly about 27% of the (2–10 keV) Cosmic X-ray Background (CXB), and to investigate their X-ray spectral properties (Cagnoni, Della Ceca and Maccacaro, 1998; Della Ceca et al., 1999). Recently the ASCA HSS has been extended: we discuss here this extension and the main results obtained so far.

## 2. The ASCA HSS Sample

The data considered for the extension of the ASCA HSS were extracted from the public archive of 1629 ASCA fields (as of December 18, 1997). The field selection criteria, the data preparation and analysis, the source detection and selection, and the computation of the sky coverage are described in detail in Cagnoni, Della Ceca and Maccacaro (1998) and Della Ceca et al. (1999). The 300 GIS2 images adequate for this project have been searched for sources with a signal-to-noise (S/N) ratio greater than 4.0 (a more restrictive criterion than that adopted in Cagnoni et al. (1998), where S/N $`\geq `$ 3.5 was used). A sample of 189 serendipitous sources with fluxes in the range $`1\times 10^{-13}`$–$`7\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, found over a total area of sky of $`\sim 71`$ deg<sup>2</sup>, has been defined. Full details on this sample will be reported in Della Ceca et al. (2000).

## 3. The 2-10 keV LogN($`>`$S)–LogS

In Figure 1 we show a parametric (solid line) and a non-parametric (solid histogram) representation of the number-flux relationship obtained using the new ASCA HSS sample of 189 sources. Also shown in Figure 1 (cross at $`3\times 10^{-11}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>) is the surface density of the extragalactic population in the Piccinotti et al. (1982) HEAO 1 A-2 sample (as corrected by Comastri et al., 1995) and the surface density of X-ray sources as determined by Kondo (1991) using a small sample of 11 sources extracted from the Ginga High Galactic Latitude survey (filled triangle at $`8\times 10^{-12}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>). The surface densities represented by the filled dots at $`1.2\times 10^{-13}`$, $`1.8\times 10^{-13}`$, and $`3.0\times 10^{-13}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> are the results from the ASCA Large Sky Survey (Ueda et al., 1999); the filled dot at $`5\times 10^{-14}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> has been obtained by Georgantopoulos et al. (1997) using 3 deep ASCA GIS observations; the filled dot at $`4.0\times 10^{-14}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> has been obtained from Inoue et al. (1996) using data from a deep ASCA observation. Finally, the filled square at $`5.0\times 10^{-14}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> has been obtained by Giommi et al. (1998) using data from the BeppoSAX deep surveys.
As can be seen, our determination of the number-flux relationship is in very good agreement with those obtained from other survey programs. The LogN($`>`$S)–LogS can be described by a power law model N($`>`$S) = $`K\times S^{-\alpha }`$ with a best fit value for the slope of $`\alpha =1.63\pm 0.09`$; the dotted lines represent the $`\pm 68\%`$ confidence intervals on the slope. The normalization K is determined by rescaling the model to the actual number of objects in the sample and, in the case of the “best” fit model, is $`K=9.65\times 10^{-21}`$ deg<sup>-2</sup> (for S in ergs cm<sup>-2</sup> s<sup>-1</sup>). At the flux limit of the survey ($`7\times 10^{-14}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>) the total emissivity of the resolved objects is $`\sim 10`$ keV cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup>, i.e. about 30% of the 2-10 keV CXB. A flattening of the number-flux relationship, within a factor of 10 of our flux limit, is expected in order to avoid saturation.
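As a numerical cross-check of the emissivity just quoted (a sketch, assuming the restored normalization $`K=9.65\times 10^{-21}`$ deg<sup>-2</sup> discussed above): integrating $`S\,dN/dS`$ above the flux limit for the best-fit power law reproduces the $`\sim 10`$ keV cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup> figure.

```python
# Cross-check of the resolved 2-10 keV emissivity: for N(>S) = K * S**-alpha,
# integrating S * dN/dS above S_lim gives
#   flux per deg^2 = alpha/(alpha - 1) * K * S_lim**(1 - alpha).
# K assumes the restored minus sign in the normalization (see text).
ALPHA = 1.63
K = 9.65e-21                  # deg^-2, for S in erg cm^-2 s^-1
S_LIM = 7e-14                 # erg cm^-2 s^-1, survey flux limit
ERG_TO_KEV = 1.0 / 1.602e-9
DEG2_PER_SR = (180.0 / 3.141592653589793) ** 2

flux_deg2 = ALPHA / (ALPHA - 1.0) * K * S_LIM ** (1.0 - ALPHA)
emissivity = flux_deg2 * DEG2_PER_SR * ERG_TO_KEV
print(f"{emissivity:.1f} keV cm^-2 s^-1 sr^-1")  # -> ~9.9, ~30% of the CXB
```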
## REFERENCES

Cagnoni, I., Della Ceca, R., & Maccacaro, T. 1998, ApJ, 493, 54
Comastri, A., Setti, G., Zamorani, G., & Hasinger, G. 1995, A&A, 296, 1
Della Ceca, R., Castelli, G., Braito, V., Cagnoni, I., & Maccacaro, T. 1999, ApJ, 524, 674
Della Ceca, R., et al. 2000, in preparation
Georgantopoulos, I., et al. 1997, MNRAS, 291, 203
Giommi, P., et al. 1998, Nuclear Physics B (Proc. Suppl.), 69/1-3, 591
Inoue, H., Kii, T., Ogasaka, Y., Takahashi, T., & Ueda, Y. 1996, MPE Rep., 263, 323
Kondo, H. 1991, Ph.D. thesis, Univ. of Tokyo
Piccinotti, G., et al. 1982, ApJ, 253, 485
Ueda, Y., et al. 1999, ApJ, 518, 656
# Background and Scattered Light Subtraction in the High-Resolution Echelle Modes of the Space Telescope Imaging Spectrograph

*Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under the NASA contract NAS 5-26555.*

## 1 Introduction

The Space Telescope Imaging Spectrograph (STIS) is a second-generation instrument on the Hubble Space Telescope (HST) designed to offer a wide range of spectroscopic capabilities from ultraviolet ($`\lambda \sim 1150`$ Å) to near-infrared ($`\lambda \sim 11,000`$ Å) wavelengths. The design and construction of STIS are described by Woodgate et al. (1998), while information about the on-orbit performance of STIS is summarized by Kimble et al. (1998). Spectra taken in the visible wavelength regime ($`\lambda \gtrsim 3000`$ Å) are recorded by a large-format (1024$`\times `$1024 pixel) CCD detector, while the ultraviolet (UV) portion of the spectrum is covered by two photon-counting multianode microchannel array (MAMA) detectors. The anode arrays used in the MAMA detectors contain 1024$`\times `$1024 elements, but photon events are centroided to half the natural spacing of the anode arrays. The detectors are read out as 2048$`\times `$2048 pixel arrays, but for our purposes (and in most STIS documentation; see Kimble et al. 1998) a pixel will be defined as one element in the lower-resolution 1024$`\times `$1024 element format. Each of the two MAMA detectors is used (in its primary modes) to image a separate portion of the UV spectrum. The far-ultraviolet (FUV) MAMA covers the wavelength range 1150–1700 Å, while the near-ultraviolet (NUV) MAMA is primarily used in the wavelength range 1650–3100 Å. The two-dimensional format of the STIS detectors offers significant multiplexing advantages over the one-dimensional Digicon detectors used in the two previous-generation spectrographs on HST, the Goddard High Resolution Spectrograph (GHRS) and the Faint Object Spectrograph. High-resolution UV spectroscopic capabilities with STIS are provided by four cross-dispersed echelle gratings, yielding resolutions $`R=\lambda /\mathrm{\Delta }\lambda `$ in the range 30,000–110,000 with the primary apertures (the 0.2″ $`\times `$ 0.09″ or 0.2″ $`\times `$ 0.2″ apertures for the E140H and E230H gratings, and the 0.2″ $`\times `$ 0.06″ or 0.2″ $`\times `$ 0.2″ apertures for the E140M and E230M gratings). Higher-resolution capabilities (up to $`R\sim 220,000`$) are available using the smallest available entrance aperture (0.1″ $`\times `$ 0.03″) with the E140H and E230H gratings. The two-dimensional format of the MAMA detectors coupled with the $`\sim 2.75`$ km s<sup>-1</sup> velocity resolution of the highest-resolution echelle gratings (E140H and E230H) provides spectroscopic capabilities that are in many ways superior to those of the GHRS. The Ech-A and Ech-B gratings used with the $`1\times 512`$ pixel Digicon detectors of the GHRS covered 7–15 Å per exposure at a resolution of 3.3–3.5 km s<sup>-1</sup>, while STIS provides $`\sim 200`$ Å coverage per exposure at slightly higher resolution.
Furthermore, the two-dimensional STIS detectors image the on-order spectrum and inter-order background and scattered light simultaneously, whereas observations with the GHRS typically required additional overhead to measure the inter-order background. The background and scattered light properties of the GHRS were well studied in both the pre- and post-flight epochs by Cardelli, Ebbets, & Savage (1990, 1993). An appropriate, simple background subtraction technique was developed by these authors and subsequently applied in the standard GHRS data processing pipeline. The scattered light properties of STIS have been discussed by Landsman & Bowers (1997) and Bowers et al. (1998). The STIS Instrument Definition Team (IDT) has developed a complex algorithm for deriving the on-order background spectrum for the STIS echelle-mode data (Bowers et al. 1998). This algorithm, to be presented by Bowers & Lindler (in prep.), derives the scattered light estimate over the whole MAMA field using an initial estimate of the on-order spectrum, laboratory measurements of the scattering properties of the gratings, and models of several sources of background light (including the detector and image point spread function halos). The final on-order background is estimated after iteratively converging on a model image that best matches the observed MAMA image, and then subtracting the background portion of the model from the original image. The implementation of this background-correction routine into the standard CALSTIS processing pipeline is currently proceeding. (CALSTIS is part of the standard STSDAS distribution available from the Space Telescope Science Institute (STScI); more information on CALSTIS and its procedures can be found in the HST Data Handbook, Voit 1997.)

In this paper we present an alternative approach for deriving the on-order background spectrum for high-resolution STIS echelle data taken with the E140H and E230H gratings. This approach is much simpler and less computationally intensive than that derived by the STIS IDT, and can be used with the data products produced by the standard CALSTIS distribution. Our derivation of the on-order background spectrum is highly empirical; we do not employ detailed models of the many sources of scattered and background light in the STIS instrument. This simplicity has its limitations. The effects of scattering by the echelle grating are not explicitly accounted for, and this causes artifacts in our background at short wavelengths. However, we will show that our algorithm is able to correctly estimate the level of scattered light in STIS echelle observations, as judged by the very small residual flux levels in the saturated cores of strong interstellar absorption lines. Furthermore, we will present comparisons of background-corrected STIS data with archival GHRS data that suggest the shape of the derived background spectrum is being estimated correctly. Artifacts caused by the influence of echelle-scattered light in our approach are well understood, and the importance of these difficulties can be sufficiently accounted for in the final error budget. We begin by briefly discussing the STIS echelle-mode spectral format and sources of background and scattered light in STIS echelle-mode data in §2. We detail our algorithm for estimating the on-order background in STIS E140H and E230H data in §3. In §4 we provide examples of background-corrected data and discuss the uncertainties of our approach.
Comparisons of STIS data extracted using our algorithm with archival high- and intermediate-resolution GHRS data are given in §5. We summarize our results in §6. Our focus throughout this paper will be on the effects of the background subtraction on the analysis of interstellar absorption lines, since that is primarily why we have developed our routines. However, the approach outlined in this paper should be applicable to most other uses of high-resolution STIS data. The routines described in this work will be made available to the general astronomical community.

## 2 The STIS Echelle Format and Scattering Geometry

Figure 1 shows a raw two-dimensional STIS E140H FUV observation of the O9.5 Iab star HD 218915 covering the wavelength range 1160–1360 Å. Several prominent interstellar absorption lines are labelled, and we have marked a number of spectral orders. The spectral orders at short wavelengths (bottom) are more tightly spaced than those at long wavelengths (top). The usable spectral orders in this exposure run from 310 to 362 (with central wavelengths from $`\sim 1358`$ to 1162 Å, respectively). Some spectral regions occur in multiple orders. For example, two of the three lines in the N I triplet near 1200 Å appear in order 350, while the whole triplet is seen in order 351. Absorption due to C II $`\lambda `$1334 can also be seen in two spectral orders (315 and 316) in this image. Figure 1 shows that the individual spectral orders are tilted with respect to the reference frame defined by the MAMA detector, and close inspection reveals they are slightly curved. In this paper we will be working with a product of the standard CALSTIS processing pipeline: two-dimensional geometrically-rectified images of each order. These flux- and wavelength-calibrated images have been extracted such that they are linear in both the wavelength and spatial (cross-dispersion) directions. The rectified images contain both the on-order spectral region as well as the adjacent inter-order light for each order.

Figure 2 schematically illustrates the geometry of the two-dimensional rectified images (top panel). A portion of a STIS comparison lamp observation (taken with the E230H grating) is also shown (bottom panel). We define the $`x`$ and $`y`$ axes to follow the dispersion and cross-dispersion directions, respectively, in these images. The center of a spectral trace is defined to be at the position $`y_0`$, and the on-order light can be approximately described by a narrow Gaussian centered on this point. Though the rectified images we will be using contain only one order, we have shown several in Figure 2 for the sake of illustration. Figure 2 also illustrates several potential forms of background light in the STIS echelle modes, and many of these can be identified in the accompanying comparison lamp image. We will discuss each of these sources of background light below. Sources of scattered and background light in the STIS echelle modes are also discussed briefly by Bowers et al. (1998) and Landsman & Bowers (1997). For comparison we point the reader to Cardelli et al. (1990, 1993) for discussions of the scattering geometry in the GHRS echelle modes, and to Schroeder (1987) for echelle geometry in general. Both the echelle and cross-disperser gratings can give rise to scattered light within the spectrograph. Cross-disperser scattering will cause light from the order to be projected perpendicular to the spectral order, i.e., along the $`y`$ axis, into the inter-order region.
Cross-disperser scattering is thought to be a relatively minor contributor in the STIS echelle modes (Bowers et al. 1998; Landsman & Bowers 1997). Echelle scattering will cause light to be dispersed into the inter-order region at small angles to the order along lines of constant wavelength. For example, a bright emission line will appear to be connected to the same wavelength in adjacent orders by echelle-scattered light in the inter-order region (see Figure 2). This form of scattering can be an important contributor to the background light in the STIS high-resolution echelle modes. It is difficult to account for without an a priori knowledge of the incoming spectrum. The iterative approach adopted by the STIS IDT should be able to account for this form of scattered light more accurately than our approach outlined in §3. Another important source of background light in the STIS echelle-mode data is the detector “halo,” whose width for the MAMA detectors can be of the same magnitude as the separation between orders (Landsman & Bowers 1997). The background spectrum caused by the detector halo will include contributions from the on-order spectrum of interest, $`m`$, and potentially light from the adjacent orders, $`m\pm 1`$, as well. Unlike light scattered by the cross-disperser and echelle gratings, the detector halo creates a background with an (approximately) axisymmetric pattern. Similarly, the halo of the telescope point spread function (PSF) will be an additional source of background light. (This form of background light is not illustrated in Figure 2.) This is essentially light from the wings of the PSF that arrives at the gratings with a slightly different angle than the narrow core of the PSF. Thus the PSF halo creates its own, lower-resolution version of the primary on-order spectrum that is also more extended in the spatial (cross-dispersion) direction. Other potential sources of stray light in the STIS echelle modes include scattered light in the telescope assembly, large-angle dust scattering, grating ghosts, reflections off of detector windows and other instrument components along the light path, and perhaps others. There is also a component of the background caused by particle radiation and phosphorescent emission from the detector window, which is particularly important for the NUV MAMA observations (Kimble et al. 1998).

## 3 Deriving the On-Order Background Spectrum

The background in STIS echelle observations is quite complex, with several sources of scattered and background light. Unfortunately, little information on the scattering properties of the instrument is available in the astronomical literature or through the various STIS instrument teams. We point the reader to the short discussions of the background and scattered light properties of the instrument by Bowers et al. (1998) and Landsman & Bowers (1997) for the best summaries provided by the STIS IDT. The level of scattered light can vary significantly within an order, as judged by the residual flux levels in the saturated cores of strong interstellar absorption lines in gross (non-background-corrected) spectra. Thus a common flux offset to the data at all wavelengths is insufficient as a background estimate. The appropriate zero levels of such saturated lines can vary by $`\sim 10\%`$ of the local gross continuum within a given order. Furthermore, the background level also varies significantly from order to order, making an average global correction inappropriate.
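Because the residual intensity in saturated line cores is the diagnostic used repeatedly below to judge the background correction, here is a minimal Python sketch of that check. The function name and the example wavelength windows are our own illustrative choices, not values from this paper.

```python
import numpy as np

# Hedged sketch: quantify the residual flux in a saturated line core as a
# fraction of the local continuum. `wave` and `net` are 1-D arrays for one
# extracted order; the windows below are illustrative only.
def core_residual(wave, net, core=(1260.25, 1260.55), cont=(1259.00, 1259.30)):
    in_core = (wave >= core[0]) & (wave <= core[1])
    in_cont = (wave >= cont[0]) & (wave <= cont[1])
    return np.mean(net[in_core]) / np.mean(net[in_cont])

# A value consistent with zero (within the noise) indicates a good background
# estimate; a single constant flux offset cannot achieve this at all wavelengths.
```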
Given the complexities of the origins of the scattered and background light (and the instrument in general), we have developed an algorithm for estimating the on-order background spectrum that does not explicitly model the sources of background light. Instead, we empirically estimate the on-order background spectrum given the structure in the inter-order regions of the MAMA detector for each individual spectral order. Similar empirically-derived approaches have been successfully applied for various other instruments using echelle gratings with two-dimensional detectors (e.g., Churchill & Allen 1995; Jenkins et al. 1996). Our method differs from those developed by Churchill & Allen (1995) for application to the Hamilton Echelle Spectrograph and by Jenkins et al. (1996) for the IMAPS instrument. In those cases the contribution to the on-order gross spectrum from the adjacent orders is derived using the known cross-dispersion profile of the closely-spaced spectral orders. The effects of order overlap are not as significant in the STIS echelle-mode observations as in either of these instruments. The background in our case is dominated by the broad halo surrounding an order, with much smaller contributions from the halos of the adjacent orders; echelle scattering can also be important (see §2).

### 3.1 The Method

In this section we present the procedure we have developed for deriving the on-order background for STIS E140H and E230H observations. Our approach is to fit the cross-dispersion profile of the observed inter-order light in a manner that correctly estimates the background at each position in the on-order spectrum. The steps used in our background removal procedure are as follows.

(1) Produce two-dimensional rectified gross and error images of each order using CALSTIS.
(2) Compress the image for a given order along the dispersion direction, thereby producing an average cross-dispersion profile.
(3) Fit and subtract a rough background to the average cross-dispersion profile derived in step 2.
(4) Identify the center of the spectral trace in the background-subtracted average cross-dispersion profile.
(5) Identify the inter-order background regions in the gross image as those having less than 3.5% of the peak on-order flux.
(6) Smooth the inter-order background with a two-dimensional Gaussian kernel.
(7) Fit the cross-dispersion inter-order background profile at each wavelength with a seventh-order polynomial, weighting points nearer the spectral trace more heavily than more distant points. This step produces a two-dimensional background image after the background has been fit for each wavelength (e.g., Figure 3).
(8) Extract one-dimensional gross, error, and background spectra from their respective two-dimensional images by summing over a seven-pixel extraction box centered on the spectral trace (summing in quadrature in the case of the error spectrum).
(9) Smooth the background spectrum with a 17-pixel Lee (1986) local statistics filter.
(10) Subtract the smoothed background spectrum from the gross spectrum to produce the final, net spectrum.

The process presented above is relatively simple. However, each of the steps has been optimized to give the best background fit, and therefore each requires a slightly more thorough discussion. We present the details of each of the steps of our technique below.
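To make the flow of these ten steps concrete before the detailed descriptions, the following is a hedged sketch in Python/NumPy (the published routines are in IDL; the array layout, the noise estimate inside the Lee filter, and all names here are our own simplifications, not the authors' code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter1d

def lee_filter(x, size=17):
    # Minimal 1-D Lee (1986) local-statistics filter: smooths flat regions
    # while preserving high-contrast edges. Estimating the noise level from
    # the median local variance is our simplification.
    x = np.asarray(x, dtype=float)
    mean = uniform_filter1d(x, size)
    var = np.clip(uniform_filter1d(x**2, size) - mean**2, 0.0, None)
    noise = np.median(var)
    gain = var / (var + noise + 1e-30)
    return mean + gain * (x - mean)

def extract_order(g, e, cutoff=0.035, box=3, deg=7):
    """g, e: rectified gross and error images of one order (y = axis 0)."""
    g = np.asarray(g, dtype=float)
    ny, nx = g.shape
    y = np.arange(ny)

    # Steps 2-4: collapse along the dispersion axis, subtract a rough
    # 4th-order fit to the outer 60% of the profile, and take the maximum
    # of the result as the trace centre y0.
    a = g.sum(axis=1)
    outer = np.abs(y - ny / 2.0) > 0.2 * ny
    a0 = a - np.polyval(np.polyfit(y[outer], a[outer], 4), y)
    y0 = int(np.argmax(a0))

    # Step 5: inter-order rows have < 3.5% of the peak collapsed intensity.
    bg = a0 < cutoff * a0.max()

    # Step 6: Gaussian smoothing, FWHM of 2.5 x 1.0 pixels in (y, x);
    # sigma = FWHM / 2.355 for a Gaussian.
    gs = gaussian_filter(g, sigma=(2.5 / 2.355, 1.0 / 2.355))

    # Step 7: weighted 7th-order polynomial fit to the inter-order profile at
    # each wavelength. The paper weights points by |y - y0|**-0.5; np.polyfit
    # squares its weights, hence the exponent -0.25 here.
    w = np.abs(y[bg] - y0).astype(float) ** -0.25
    b = np.empty(g.shape, dtype=float)
    for i in range(nx):
        coeffs = np.polyfit(y[bg], gs[bg, i], deg, w=w)
        b[:, i] = np.polyval(coeffs, y)

    # Step 8: seven-pixel box extraction; errors add in quadrature.
    rows = slice(y0 - box, y0 + box + 1)
    g1 = g[rows].sum(axis=0)
    b1 = b[rows].sum(axis=0)
    e1 = np.sqrt((np.asarray(e, dtype=float)[rows] ** 2).sum(axis=0))

    # Steps 9-10: Lee-filter the background and subtract it from the gross.
    b1 = lee_filter(b1)
    return g1 - b1, b1, e1
```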
(1) For each order in the observation, we produce the geometrically-rectified two-dimensional gross and error images, which we denote $`g_{i,j}`$ and $`e_{i,j}`$, respectively, using the standard CALSTIS software available at the Space Telescope Science Institute. For our purposes, $`i`$ and $`j`$ refer to the dispersion and cross-dispersion directions, respectively. In this case, increasing values of $`i`$ imply increasing wavelengths, $`\lambda `$. These images have been flux and wavelength calibrated, though our method works just as well for count-rate images.

(2) We compress the gross image of a given order along the dispersion direction, creating an order-averaged cross-dispersion profile, $`a_j`$:

$$a_j\equiv \sum _ig_{i,j}.$$ (1)

We use this average profile to determine the $`y`$-position of the center of the spectral trace, $`y_0`$, and to define the background regions of the rectified gross image.

(3) The array $`a_j`$ contains contributions from both the on-order spectrum and the on-order and inter-order backgrounds. To approximate the average background pedestal in the collapsed spectrum, we fit a fourth-order polynomial to the cross-dispersion averaged profile, excluding the inner 40% of the array. The rectified images are slightly smaller than twice the separation between the order of interest and its neighbors in the raw MAMA image. (The inter-order separation in the raw MAMA image scales linearly with the central wavelength of the order.) For short-wavelength observations, the fitting process described above excludes the central 12 pixels (the FWHM of the spectral trace is $`\sim 2`$–3 pixels). This rough fit is subtracted from $`a_j`$, producing a background-subtracted order-averaged cross-dispersion profile, $`a_j^0`$.

(4) We identify the $`y`$-position of the center of the spectral trace, $`y_0`$, with the maximum of $`a_j^0`$. Although a Gaussian fit may be more appropriate in theory, in practice we find identifying the center of the spectral trace by its maximum is sufficient.

(5) After identifying the center of the trace, we define the background regions, $`y_b`$, of the two-dimensional gross image to be those points where $`a_j^0<0.035\mathrm{max}(a_j^0)`$. Those points that meet this criterion will be used in fitting the cross-dispersion profile of the inter-order background light. We have experimented with many different cut-off criteria for defining the background regions in the two-dimensional rectified images. The resulting background estimate can be somewhat sensitive to this parameter. Our experimentation suggests a cut-off at 3.5% of the peak collapsed spectrum intensity is appropriate in most cases. The derived backgrounds are similar if one uses cut-off levels in the range 2%–5%. The $`y`$-coordinates of the background regions are the same for each wavelength within an order.

(6) After determining which regions of the image $`g_{i,j}`$ make up the off-order background, we smooth those regions with a two-dimensional Gaussian having FWHM of 1.0 and 2.5 pixels in the dispersion and cross-dispersion directions, respectively. This reduces some of the statistical noise, making the fitting process less susceptible to random noise fluctuations.

(7) Next we produce a background image, $`b_{i,j}`$, of the same dimension as the gross and error images by fitting a one-dimensional seventh-order polynomial to the cross-dispersion background profile at each point $`i`$ (each wavelength $`\lambda `$).
Without the smoothing of the previous step, the background image produced, $`b_{i,j}`$, contains an excessive amount of small-scale structure. The importance of each cross-dispersion point, $`y_j`$, in the fitting process is weighted by $`|y_j-y_0|^{-1/2}`$, so that regions closest to the on-order spectrum are weighted more strongly than points at large distances from the spectral trace. Several different weighting schemes were tested, and the results were relatively insensitive to the choice among schemes that weight points toward the center more heavily. More uniform weighting schemes generally do not work as well. Only those points $`y_j`$ contained within $`y_b`$ are used in the fitting process. We have found that fits using polynomials of order higher than seventh often give spurious results (particularly given the small number of background points used in the fits at short wavelengths). Lower-order fits often fail to adequately describe the curvature in the inter-order light distribution as well as the adopted seventh-order fits.

(8) After creating a background image, we extract one-dimensional gross, background, and error spectra using a seven-pixel extraction box. For the gross and background images, the seven points centered on the spectral trace are summed to produce the on-order gross spectrum ($`g_\lambda `$) and initial background spectrum ($`b_\lambda ^{}`$):

$$g_\lambda \equiv g_i=\sum _{j=y_0-3}^{y_0+3}g_{i,j},$$ (2)

and

$$b_\lambda ^{}\equiv b_i=\sum _{j=y_0-3}^{y_0+3}b_{i,j}.$$ (3)

The error spectrum, $`e_\lambda `$, is produced by adding the seven central pixels in quadrature:

$$e_\lambda \equiv e_i=\left(\sum _{j=y_0-3}^{y_0+3}e_{i,j}^2\right)^{1/2}.$$ (4)

We have chosen to adopt a seven-pixel summation extraction box to match the standard CALSTIS routines, though other extraction box sizes can be adopted. An optimal extraction routine may also be used.

(9) The final background spectrum, $`b_\lambda `$, is produced by smoothing the extracted background spectrum $`b_\lambda ^{}`$ with a 17-pixel Lee (1986) local statistics filter. The Lee filter effectively removes high-frequency noise in flat regions of the spectrum while leaving high-contrast edges in the spectrum intact. This is particularly important for spectra containing narrow interstellar absorption lines.

(10) The final net spectrum, $`n_\lambda `$, is the difference between the gross and smoothed background spectra, $`n_\lambda \equiv g_\lambda -b_\lambda `$.

### 3.2 Illustration of the Method

Figure 3 shows a portion of the raw, two-dimensional rectified gross image ($`g_{i,j}`$) of order 334 taken from the E140H observation shown in Figure 1. Also shown is the corresponding two-dimensional background image ($`b_{i,j}`$) derived as described above, and the difference of these images ($`n_{i,j}\equiv g_{i,j}-b_{i,j}`$). All of the images shown in Figure 3 are displayed with the same intensity scale, which is truncated at a low level to show the structure of the inter-order background. The wavelength coverage of this section of the spectrum is $`\lambda \approx 1258.6`$ to 1261.4 Å; this region includes absorption from S II $`\lambda `$1259 and Si II $`\lambda `$1260 (as well as weaker absorption from Fe II $`\lambda `$1260 and C I $`\lambda `$1261). The final extracted net spectrum ($`n_\lambda `$) and background ($`b_\lambda `$) are shown in Figure 4. Several aspects of Figure 3 are noteworthy.
First, the echelle-scattered images of the saturated interstellar absorption lines are faintly visible in the inter-order regions of the $`g_{i,j}`$ image. These can also be seen to some extent in the model background image, $`b_{i,j}`$. The background image contains the imprint of echelle-scattered light since the fitting process does not specifically exclude such regions. Second, the background image, $`b_{i,j}`$, shows significant high-frequency variations in the dispersion direction. This is a result of fitting each column (wavelength) independently. The application of the Lee filter to the extracted one-dimensional background spectrum, $`b_\lambda ^{}`$, removes much of this high-frequency noise from the final background spectrum, $`b_\lambda `$ (see §3.1). The inter-order regions of the background-subtracted image, $`n_{i,j}`$, are relatively smooth and very near zero residual intensity. There is little residual large-scale structure in the background regions of the $`n_{i,j}`$ image.

Figure 5 shows several examples of cross-dispersion profiles (histogram) extracted from the rectified two-dimensional image of this order. The cross-dispersion profiles are numbered, and the positions of these profiles in wavelength space are marked on the image and spectrum shown in Figure 4. These profiles are drawn from several different regions of the spectrum, including continuum regions (position 1) as well as regions that include weak (positions 5 and 6) and strong (positions 2 through 4) absorption. The smooth curves plotted on top of the cross-dispersion profiles show our one-dimensional polynomial fits to the inter-order background, and the squares mark the data points used for deriving those fits. One can see that the inter-order backgrounds are well fit at most of these positions. A counter-example is position 3, which is marked with a star in the spectrum shown in Figure 4. In this case the background is over-estimated. The difficulty in fitting this position is caused by the effects of echelle scattering; we will discuss this in more detail in §4.

To show how the background we are fitting in these cross-dispersion profiles relates to the larger-scale distribution of light in the two-dimensional MAMA images, we present a cross-dispersion profile through several orders in Figure 6. The distribution shown here is drawn from the raw (gross) MAMA observation of HD 218915. The identifications of the orders are given along the top of the figure. This particular cross-dispersion cut passes through the saturated troughs of two strong interstellar lines: N I $`\lambda `$1200.2 in order 351 and Si II $`\lambda `$1193.3 in order 353. Figure 6 shows a background fit for order 351 as the solid line. The points used in this background fit are marked with filled squares, as in Figure 5. One can see that while the orders are closely spaced, the excellent image quality of HST concentrates the on-order light into the central few pixels. Thus it is only the wings of the halos from adjacent orders that overlap, and not the cores of the on-order light distribution. This can be compared with the cross-dispersion profiles shown in Churchill & Allen (1995) and Jenkins et al. (1996), where the overlap of light from adjacent orders is much more significant.

### 3.3 Potential Limitations

The procedure we have outlined above is quite simple and can be summarized as a single step: we fit the cross-dispersion profile of the inter-order light to estimate the on-order background spectrum.
It seems established, however, that the cross-disperser gratings contribute little to the scattered light budget of the STIS echelle modes (Bowers et al. 1998; Landsman & Bowers 1997). The more important sources of scattered and background light in the echelle modes are the PSF halo, the detector halo, echelle-grating scattered light, and the particle/phosphorescent background. Neither the halos nor the echelle scattering gives a pure cross-dispersion profile of scattered light. It is important to consider the effects this may have on the approach outlined in §3.1, given that we are only fitting the cross-dispersion profile of the background light. The PSF and detector halos provide (roughly) axisymmetric scattering about a given point in the spectral trace (see the lamp spectrum in Figure 2). Thus, this type of scattering distributes light both in the dispersion and cross-dispersion directions. The inter-order light at a given point contains halo-scattered light that originates from several different wavelengths. The weighting of data points close to the spectral trace in our inter-order background fits causes light scattered at small angles from the dispersion direction by the PSF and detector halos to play an important role in the resulting estimate of the on-order background spectrum. The broad halos surrounding an order seem to be the dominant source of on-order background. For continuum sources the effects of the halos of orders $`m\pm 1`$ are significantly less than the halo of the order $`m`$ itself. To quantify this we have derived a composite cross-dispersion profile showing the distribution of halo light as a function of distance from an order. Figure 7 shows composite cross-dispersion profiles derived for two wavelength regions. The top plot shows a region near 1216 Å derived from E140H observations of strong Ly$`\alpha `$ emission from the solar analog $`\alpha `$ Centauri A (kindly made available to us by J. Valenti). The bottom plot shows a region near 2800 Å derived from Mg II emission from $`\alpha `$ Orionis. The intensities have been normalized to unity at the center of each order. The points are averages of emission from two orders (346 and 347 for the E140H data and 276 and 277 for the E230H data) drawn from the brightest emission regions in these spectra. We have averaged only one side of each order to exclude the brightest echelle-scattered light. The filled circles have intensities $`\lesssim 3.5\%`$ of the peak and would hence be used in constructing the background using our procedure (see §3.1). The bump of excess emission centered at $`|\mathrm{\Delta }y|=18`$ (marked as $`m\pm 1`$) in the E140H plot is caused by weak emission from the adjacent orders. The adjacent orders in the E230H plot are located at $`|\mathrm{\Delta }y|=37`$, i.e., beyond the edge of the plot. At the position of the adjacent order for the E140H data ($`|\mathrm{\Delta }y|=18`$ in Figure 7), the expected relative intensity of the broad halo from the central order is only a few percent of its peak (extrapolating the halo contribution marked with filled circles to $`|\mathrm{\Delta }y|=0`$). The contribution from the halos of adjacent orders to the order being extracted is therefore expected to be only a few tenths of a percent of the peak on-order flux. Similar limits can be derived from the lack of strong spectral features from adjacent orders $`m\pm 1`$ in the gross spectrum of an order $`m`$.
The contribution of the halos from orders $`m\pm 2`$ to an order $`m`$ should be less than 0.1% of the contribution of its own halo. For the E230H distribution, the halos have a flatter decline with distance from the center of the order. However, the overall level of the halo is significantly lower relative to the peak intensity in the center of the order compared with the E140H distribution. This is consistent with the much lower level of the backgrounds seen at 2800 Å than at Ly$`\alpha `$ (see §4.3). For continuum regions the halo contribution from the current order is much more important than that from the adjacent orders. This is in part why we have not adopted a method similar to that of Churchill & Allen (1995) or Jenkins et al. (1996). In cases where strong emission lines are present in an adjacent order, it is true that the contribution from the halo of an adjacent order $`m\pm 1`$ can be more important than the local halo if the emission from the local order $`m`$ is very weak.

Light scattered by the echelle gratings is the single most important limiting factor in our approach. In theory, if the spectrum scattered into the inter-order region is smooth over relatively large scales, echelle-scattered light could be accurately fit by our technique. In that case, the inter-order light is a smoothly varying function in the cross-dispersion direction, i.e., there is a degeneracy between cross-disperser and echelle scattering in the inter-order regions for a smooth spectral source. However, there are small-scale structures in the scattered spectrum (e.g., interstellar absorption features, particularly the strongly saturated lines). In the case where strong or numerous small-scale spectral features are scattered by the echelle gratings into the inter-order regions, the validity of our approach may break down. Echelle scattering of local interstellar features into the inter-order regions can bias the cross-dispersion fits (see §4). This will be particularly true if the scattering by the echelle gratings makes up a very large fraction of the scattered light budget. In general, we believe the relative echelle-scattering contribution to the total on-order background spectrum is less important than the detector halo (in terms of the relative intensities). Using the E230H lamp observation shown in Figure 2, we estimate that the echelle-scattering contribution to the inter-order background at $`\lambda \approx 2450`$ Å is $`<5\%`$ of the contribution from the halo seen surrounding strong emission lines. Using the aforementioned E140H observations of Ly$`\alpha `$ emission from $`\alpha `$ Cen, we have found that the peak intensity of the echelle-scattered emission is equivalent to the strength of the halo shown in Figure 7 at $`|\mathrm{\Delta }y|\sim 10`$ pixels (i.e., $`\sim 10\%`$ of the expected halo intensity near the center of the order). However, the impact of echelle scattering on our method is more important than its intensity relative to the broad detector halo would suggest. Discrete features in the echelle-scattered inter-order light adversely affect the cross-dispersion fits in some cases. Thus the results for regions near very strong, sharp absorption (or emission) lines will need to be viewed with caution, particularly at short wavelengths where the inter-order separation is smaller. This is discussed more fully in §4.

## 4 Results

### 4.1 Examples and Uncertainties

In this section we show several examples of STIS high-resolution echelle data extracted as discussed in §3.
The wavelength regions shown contain strongly-saturated interstellar lines. These flat-bottomed lines, which should have zero residual flux, offer a chance to assess the quality of the background subtraction through their shape and flux level. A detailed comparison of STIS data extracted with our algorithm and archival GHRS data will be given in §5. We draw examples in this section both from archival STIS data and our own guest observer program (STScI program #7270). Figures 8, 9, and 10 show examples of short-wavelength ($`\lambda <1350`$ Å) E140H spectra for the stars HD 218915, HD 185418, and HD 303308, respectively. The four spectral regions shown in these figures contain saturated absorption lines near 1200 Å (the N I triplet), 1260 Å (S II $`\lambda `$1259 and Si II $`\lambda `$1260), 1302 Å (O I $`\lambda `$1302 and Si II $`\lambda `$1304), and 1334 Å (C II $`\lambda `$1334 and C II $`\lambda `$1335). The solid lines show the background-corrected on-order spectrum, $`n_\lambda `$, while the dashed lines show the on-order background spectrum, $`b_\lambda `$, that was subtracted from the extracted gross spectrum. For the region near C II $`\lambda `$1334, we have co-added two spectral orders (315 and 316) weighted by their respective error spectra (a brief sketch of this co-addition is given below). Figure 11 shows several longer-wavelength spectra of HD 303308, including three spectral regions observed with the E230H grating. This figure shows four spectral regions that include the saturated absorption lines Al II $`\lambda `$1670, Fe II $`\lambda `$2382 and $`\lambda `$2600, and Mg II $`\lambda `$2796. As in the previous figures, the on-order background spectrum is shown as the dashed line.

The representative spectra shown in Figures 8–11 exhibit several common features of our data extraction routine. First, we find that the ratio of the on-order background spectrum to the on-order net (or gross) spectrum is higher in the shorter-wavelength observations. This is consistent with expectation, particularly given the smaller order spacing at shorter wavelengths. The ratio of on-order background to gross spectrum will be discussed in more detail in §4.3. Next, it is clear from Figures 8 through 11 that the quality of the background subtraction is much better at longer than at shorter wavelengths. For all the stars shown in these figures, the absorption due to C II $`\lambda `$1334 is flat-bottomed at zero flux level. Towards progressively shorter wavelengths, the saturated absorption profiles tend to show excursions from true zero, with slight artifacts appearing in the black troughs. Figure 12 shows expanded views of regions of the HD 218915 spectra containing strongly-saturated interstellar absorption. The profile of Si II $`\lambda `$1260 in Figure 12 shows a slight over-subtraction of the background on the lower-wavelength end of the otherwise black absorption trough. The N I $`\lambda `$1199 profile seen in Figure 12 shows over-subtracted regions at various points along the absorption trough. The background in this case shows significant spurious structure (e.g., near $`\lambda =1199.35`$ Å). Over-subtraction of the background is seen for all three members of the N I triplet near 1200 Å (see Figure 8). However, C II $`\lambda `$1334 shows only a very small over-subtraction of the background. The long-wavelength observations of HD 303308 shown in Figure 11 show no such artifacts, and the saturated absorption profiles are at the correct zero level over their entire breadth.
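The order co-addition mentioned above is, we presume, a standard inverse-variance weighting; a minimal sketch of that operation follows (the names are ours, and the two orders are assumed to have already been resampled onto a common wavelength grid):

```python
import numpy as np

# Hedged sketch: inverse-variance co-addition of two overlapping orders.
# f1, f2 are flux arrays and e1, e2 the corresponding error arrays, all on
# the same wavelength grid; this is our reading of "weighted by their
# respective error spectra," not the authors' exact code.
def coadd(f1, e1, f2, e2):
    w1, w2 = 1.0 / e1**2, 1.0 / e2**2
    flux = (w1 * f1 + w2 * f2) / (w1 + w2)
    err = 1.0 / np.sqrt(w1 + w2)
    return flux, err
```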
This characteristic over-subtraction of the background in short-wavelength observations seems to be caused by the influence on our cross-dispersion polynomial fits of echelle scattering of strong lines into the inter-order background (see §3.3). Figure 5 shows several examples of cross-dispersion profiles extracted from the rectified two-dimensional image of order 334 in our E140H observations of HD 218915, as discussed in §3.2. Most of the cross-dispersion profiles shown in Figure 5 are examples of regions where our fitting procedure works well. Position 3, marked with a star in the spectrum shown in Figure 4, shows a wavelength region where our background fit overestimates the true background level. In this case, the saturated line core is over-subtracted by $`\sim 1\%`$ of the local continuum. The cross-dispersion profile for position 3 contains two regions, marked with open squares in Figure 5, with lower intensity than their surroundings (the order is centered at $`y=15`$). These discrete regions on either side of the spectrum are images of strong interstellar absorption from the on-order spectrum that have been scattered into the inter-order region by the echelle grating. The dashed line over-plotted on the cross-dispersion profile of position 3 shows a polynomial derived by excluding these four points from the fit. The fit shown by the dashed line is significantly better than that used in deriving the spectrum shown near the bottom of the figure. However, we do not at present have an algorithm to identify and exclude such points from the cross-dispersion fits. If the spatial distribution of the echelle-scattered background were derived, and the effects of this scattering removed from the two-dimensional images before the cross-dispersion fitting described here, our approach would likely have fewer problems at short wavelengths. The method developed by Churchill & Allen (1995) for use with the Hamilton Echelle Spectrograph does in effect build a model two-dimensional background surface which is subtracted from the raw data before standard spectral-extraction techniques are applied. The derivation of the echelle scattering properties is, however, beyond the scope of this work.

Our emphasis in presenting (and testing) our background subtraction routines has been on observations of early-type stars, since these are the most important for studying the interstellar medium. However, our reduction techniques should also be suitable for most other types of observations. To demonstrate the applicability of our approach to other types of observations, we have reduced archival STIS E230H observations of $`\alpha `$ Orionis (Betelgeuse). The region surrounding the Mg II $`\lambda `$2800 doublet is shown in Figure 13. The saturated interstellar absorption lines seen on top of the stellar emission lines are flat-bottomed at zero flux, as expected. We have also tested our routine on the aforementioned FUV STIS observations of the solar analog $`\alpha `$ Cen. In general our routines did an excellent job of deriving the background for this emission-line dataset. However, in the orders adjacent to the extremely bright Ly$`\alpha `$ emission from this star, the wings of the detector halo extended well into the very weak emission from the adjacent orders (see Figure 7). This caused our routines to over-subtract the very bright background from the very weak continuum and line emission in the orders adjacent to Ly$`\alpha `$.
This is the one instance we have identified where our procedures are not able to appropriately fit the inter-order background. Our tests suggest that, with this one exception, our extraction procedures are doing reasonably well even for late-type stars.

### 4.2 Caveats

Given the difficulties in fitting the cross-dispersion profiles at short wavelengths, where the halos of the closely-packed spectral orders may overlap significantly and the echelle scattering affects a greater fraction of inter-order pixels, we believe particular attention should be paid to background uncertainties. Lines having residual fluxes less than 10–20% of the local continuum flux should be treated with care. Astronomers working with such lines in high-resolution STIS data should consider the effects of zero-point uncertainties that depend on wavelength in deriving column densities and other parameters. The fluctuations in our background estimates can be used to approximate the uncertainties in the overall zero level for individual spectral regions. For example, the standard deviation about the mean of the background spectrum in the spectral region encompassing the saturated Si II 1260 Å line is $`\sim 0.6\%`$ of the local continuum. The 2–3$`\sigma `$ fluctuations are a good estimate of the potential error in the background spectrum in this case (as judged by the maximum over-subtraction of the Si II line). For lines with residual fluxes that are $`\sim 10\%`$ of the local continuum, this implies potential systematic errors in the optical depth of order 10%. Although the background subtraction at short wavelengths, e.g., $`\lambda \lesssim 1300`$ Å, often shows evidence of slight artifacts in the saturated absorption troughs of deep lines, we find little evidence for such problems in the longer-wavelength observations. This is due to the larger order separation, generally higher signal-to-noise ratios, and smaller contribution of the on-order background to the gross spectrum. Data taken with the E140H grating at wavelengths in excess of $`\lambda \sim 1600`$ Å and with the E230H grating at wavelengths $`\lambda \gtrsim 2200`$ Å (where $`b_\lambda /g_\lambda \lesssim 0.1`$; see §4.3) are likely more secure than shorter-wavelength data in each of the gratings. While these artifacts can be present at wavelengths shorter than $`\sim 1300`$ Å, the difficulties are most apparent at wavelengths shortward of Ly$`\alpha `$. However, analysis of detailed line shapes for strong absorption should still be approached with caution for $`\lambda \lesssim 1300`$ Å.

In general, astronomers using a routine like the one outlined here may decide to apply a second-order correction that adjusts the output spectrum by a simple vertical flux offset. Such a shift is designed to bring the majority of a saturated line profile to the correct zero level in a given order. We have calculated the shift required to make the average of the saturated line profiles of the short-wavelength lines observed towards HD 303308 equal zero flux. We find an offset of 1–2% of the local continuum is most appropriate. Although we are skeptical that this is the best way to correct spectra where the saturated lines fall somewhat below zero, the correction is typically modest. The quality of the background estimation using the algorithm presented here is dependent on the signal-to-noise ratio of the dataset. In low signal-to-noise ratio data, the polynomial fitting routine used in our method can produce spurious background excursions.
Thus, if one wants to accurately remove the on-order background using our procedures, it is important to design observing programs that produce enough inter-order light to be accurately fit. For practical purposes, this means that datasets with signal-to-noise ratios in excess of 20 are needed for accurate background determinations.

### 4.3 The On-Order Background Fraction

We have calculated the fractional contribution of the on-order background light to the on-order gross spectrum as a function of wavelength (spectral order) for a number of archival STIS datasets as well as our own guest observer program data. Figure 14 shows this fraction for the E140H grating. Each data point represents the median value of the derived on-order background divided by the extracted gross spectrum for a given spectral order within a single observation, i.e., the median of $`b_\lambda /g_\lambda `$. In making this plot, we have extracted the data in counts rather than flux. This removes any differences between the datasets caused by discrepant flux calibrations. Average and median values of the background-to-gross ratio are collected in Table 1 for several spectral orders. In Figure 15 and Table 2 we collect analogous values for data taken with the E230H grating. All of the stars included in these determinations are early-type stars (O and B type, as well as one white dwarf). The ratio $`b_\lambda /g_\lambda `$ is likely to depend on the shape of the underlying stellar continuum. This is obvious even in the current data given the large dispersion that appears near strong P Cygni stellar wind lines (e.g., Si IV $`\lambda `$1400 and C IV $`\lambda `$1550 in Figure 14).

Figures 14 and 15 show a general decrease in the prevalence of the background light at longer wavelengths. This is in qualitative agreement with expectation, since most scattering phenomena show a decreasing intensity with increasing wavelength. For example, diffracted intensity decreases as $`\lambda ^{-2}`$, while Rayleigh-scattered intensity decreases as $`\lambda ^{-4}`$. Furthermore, the order spacing increases with wavelength, making the fitting process more robust. We find that the background-to-gross fraction, $`b_\lambda /g_\lambda `$, decreases approximately as $`\lambda ^{-3}`$ for both gratings. This scaling relationship is only approximate, and the E140H data are not well fit by $`b_\lambda /g_\lambda \propto \lambda ^{-3}`$ over the entire wavelength range tested. There is an excess in $`b_\lambda /g_\lambda `$ at short wavelengths compared with this scaling, and the effects of strong stellar wind and interstellar absorption features are visible in many other wavelength regions (see Figure 14). Furthermore, the ratio $`b_\lambda /g_\lambda `$ for E230H data may rise beyond 3000 Å, though our data in that region are sparse. The inter-order separation, $`\mathrm{\Delta }y_m`$, in STIS high-resolution echelle-mode observations increases linearly with wavelength. Hence the background-to-gross fraction, $`b_\lambda /g_\lambda `$, can also be described as a cubic polynomial in $`1/\mathrm{\Delta }y_m`$. Again, this scaling is not perfect for the E140H data. Churchill & Allen (1995) have shown a similar correlation between the order separation and (in their case) inter-order intensity, and they have used it to argue that the contamination from adjacent orders $`m\pm 1`$ (and beyond) was the dominant contributor to the on-order background spectrum in data from the Hamilton Echelle Spectrograph.
However, we do not believe that order-overlap effects are causing the relationship seen in Figures 14 and 15. The lack of direct order overlap and the relatively small contribution from the halos of adjacent orders (see Figure 7) in the STIS data strongly suggest that neither of these is driving the relationship seen in Figures 14 and 15. Potentially more likely is the effect of echelle scattering for small order separations, but our discussion in §3.3 suggests this contribution may not be enough to cause these relationships either. It is more likely that Figures 14 and 15 are tracing the wavelength dependence of the processes causing the scattered and background light.

The background-to-gross fractions shown in Figures 14 and 15 are dependent on the size of the extraction box used in determining the gross and background spectra. The cross-dispersion profile of the spectral trace is roughly Gaussian (see Figure 5). Our default extraction procedure adopts a seven-pixel extraction box (from $`y_0-3`$ to $`y_0+3`$, where $`y_0`$ is the center of the spectral trace). The outermost pixels in the extraction box have much higher background-to-gross values than the innermost pixels. Thus, if only the central three pixels were used in the spectral extraction (e.g., $`y_0-1`$ to $`y_0+1`$), the on-order background-to-gross ratio would be significantly lower. Experiments using a three-pixel extraction box for data near 1250 Å suggest the decrease in $`b_\lambda /g_\lambda `$ can be of order 30%. Given the relatively large contribution of the on-order background light to the gross spectrum at short wavelengths, it may be better in some cases to use smaller extraction boxes. The size of the extraction box can be chosen to minimize the impact of statistical noise in the background on the extracted net spectrum. If the shortest-wavelength region is vital to a STIS program, the background fraction must be accounted for when calculating exposure times.

## 5 Comparisons with GHRS Data

In order to assess the accuracy of our STIS background correction scheme, we will compare STIS data extracted as described above with data from the previous high-resolution UV spectrograph on board HST, the Goddard High Resolution Spectrograph (GHRS). This section presents two such comparison datasets. First, we compare STIS E140H observations of HD 218915 with observations of the same star made with the first-order G160M grating of the GHRS. The scattered light properties of the intermediate-resolution G160M grating are excellent (see Cardelli et al. 1993), making data obtained with this grating a good point of comparison for our background correction. Second, we compare archival STIS E140H observations of the nearby DA white dwarf G191-B2B with GHRS Ech-A high-resolution data for the same star. The resolutions of the STIS and GHRS G191-B2B datasets are similar, allowing a good comparison without the need to smooth the STIS data.

### 5.1 Comparison of STIS E140H and GHRS G160M Observations of HD 218915

The star HD 218915 lies behind gas associated with the Perseus arm; it was observed extensively with the GHRS first-order G160M grating. We have retrieved these data from the HST archive and reduced them as described by Howk, Savage, & Fabian (1999). These data, taken through the small science aperture (0.25″ $`\times `$ 0.25″) before the installation of COSTAR, have a velocity resolution of $`\sim 18.6`$ km s<sup>-1</sup> (FWHM) at 1250 Å.
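An aside on the smoothing used in the comparison described next: the 18.4 km s<sup>-1</sup> kernel applied to the STIS data appears to be the Gaussian that degrades the $`\sim 2.75`$ km s<sup>-1</sup> STIS resolution to the $`\sim 18.6`$ km s<sup>-1</sup> GHRS resolution, since Gaussian widths add in quadrature; this is our inference, not an explicit statement in the text.

```python
import numpy as np

# Hedged check: Gaussian FWHMs add in quadrature, so the kernel needed to
# degrade the STIS resolution (~2.75 km/s FWHM) to the GHRS G160M resolution
# (~18.6 km/s FWHM) is sqrt(18.6**2 - 2.75**2) ~= 18.4 km/s, matching the
# kernel width quoted below. (Our inference, not the authors' statement.)
print(np.sqrt(18.6**2 - 2.75**2))  # -> ~18.4
```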
Our comparison of the STIS and GHRS data for this sightline will focus on the S II lines $`\lambda `$1250, $`\lambda `$1253, and $`\lambda `$1259 and Si II $`\lambda `$1260; strong C I absorption is also seen in this spectral region (see Figure 8). The STIS E140H data, which have a resolution $`\mathrm{\Delta }v\approx 2.75`$ km s<sup>-1</sup>, must be smoothed for a direct comparison with the intermediate-resolution GHRS G160M data. We have convolved the higher-resolution STIS data, after combining several spectral orders, with a Gaussian having a FWHM of 18.4 km s<sup>-1</sup> and rebinned these smoothed STIS data to approximately the same sampling as the GHRS data. We have also normalized the STIS data with a linear continuum to bring the slope and continuum intensity levels of the GHRS and STIS data into agreement. After the standard background subtraction, we find no evidence for residual scattered light in the GHRS G160M observations, as judged by the saturated core of Si II $`\lambda `$1260.

Figure 16 shows the comparison between the GHRS G160M data and the smoothed STIS E140H data extracted with our background correction scheme. The top panel shows absorption due to S II $`\lambda `$1250, the middle panel shows S II $`\lambda `$1253, and the bottom panel includes absorption from S II $`\lambda `$1259, Si II $`\lambda `$1260 (blended with Fe II), and several lines of C I. Unsmoothed data for the spectral region covered in the bottom panel are shown in Figure 8. We have removed a slight velocity offset between the two datasets in producing Figure 16. The agreement between the two datasets shown in Figure 16 is generally excellent. The depths and shapes of all three S II lines are in good agreement between the STIS and GHRS datasets. The STIS Si II $`\lambda `$1260 profile exhibits the slight over-subtraction of the saturated line core that we have pointed out previously (on the low-wavelength end). However, the rest of the Si II profile and the adjoining C I transitions show remarkable agreement in the two datasets. We find a similar agreement between the longer-wavelength transitions of Si II $`\lambda `$1526 and C IV $`\lambda `$1548, though we do not show those data. Given the good scattered light properties of the first-order GHRS G160M grating, the agreement between these two datasets suggests our background subtraction algorithm is producing reliable estimates of the on-order background spectrum, within the limits of this comparison. The heavy smoothing required to compare the STIS and GHRS datasets could, however, mask important differences between the datasets.

### 5.2 Comparison of STIS E140H and GHRS Ech-A Observations of G191-B2B

The nearby ($`d\approx 69`$ pc) DA white dwarf G191-B2B has been well observed by both the STIS and GHRS in their high-resolution modes. The low column density of material along this sightline \[$`\mathrm{log}N(\text{H I})\approx 18`$ in atoms cm<sup>-2</sup>\] has made it important for studying the D/H abundance in the local ISM. Results derived from the GHRS observations have been published by Vidal-Madjar et al. (1998). The STIS data have been published by Sahu et al. (1999), who used the STIS IDT background correction algorithm (Bowers & Lindler, in prep.) in their data reduction. We retrieved the archival GHRS Ech-A data for G191-B2B and reduced them as described by Howk et al. (1999). There are a large number of GHRS exposures covering the region of the spectrum containing interstellar Ly$`\alpha `$.
We summed all those exposures taken at the same grating carrousel position within an individual observation (visit). We then merged the resulting vectors using the standard wavelength scales, at the same time solving for and removing the fixed-pattern noise spectrum. The standard GHRS background subtraction over-estimates the background level near the saturated interstellar Ly$`\alpha `$ profile. We adopted a value for the “$`d`$-coefficient” of $`d=0.002`$ to bring this saturated profile to the appropriate zero level (see Cardelli et al. 1990, 1993). (That this $`d`$-coefficient is lower than that usually predicted near Ly$`\alpha `$ is not unexpected given the much higher continuum levels of G191-B2B in this region of the spectrum compared with stars behind larger interstellar column densities. All of the other data presented here have been reduced using the standard $`d`$ values of Cardelli et al. 1993.) The G191-B2B GHRS data presented here were all taken through the small science aperture. The Si III profile lies at the edge of the Digicon detector array in the GHRS observations of the 1206 Å region. We have used only the three (of four) FP-SPLIT positions where the Si III absorption was shifted away from the edge of the detector array in deriving the profile seen in the bottom panel of Figure 18.

Figure 17 shows a comparison of our extraction of the STIS E140H observations of the wavelength region surrounding Ly$`\alpha `$ with the GHRS Ech-A observations of this line. We have co-added two overlapping orders to produce the STIS profiles (orders 346 and 347). The GHRS data have been scaled by a multiplicative constant to match the STIS data. The velocity resolution of the GHRS Ech-A data ($`\sim 3.5`$ km s<sup>-1</sup>) is somewhat worse than the resolution of the STIS E140H data; however, the GHRS data have finer sampling ($`\sim 0.88`$ km s<sup>-1</sup> vs. $`\sim 1.25`$ km s<sup>-1</sup> for the STIS E140H data). Figure 3 of Sahu et al. (1999) compares the STIS Ly$`\alpha `$ profile as extracted with the IDT reduction routines with their reduction of the GHRS Ech-A data (though they have applied a three-pixel smoothing kernel to their data). Again, the agreement between the GHRS and STIS profiles shown here is excellent. The curves are virtually indistinguishable, save for the geocoronal Ly$`\alpha `$ emission in the center of the GHRS profile. The center of the STIS Ly$`\alpha `$ profile shows a slight over-subtraction, consistent with the discussion in §4. The D I Ly$`\alpha `$ profiles are in excellent agreement. There is no compelling reason to believe that the STIS and GHRS data along this sightline are in disagreement, as suggested by Sahu et al. (1999). Our GHRS Ly$`\alpha `$ profile is in good agreement with that of Vidal-Madjar et al. (1998). Our profile is in agreement with that derived by the IDT reduction software to within the statistical and background uncertainties.

Figure 18 shows a comparison of the normalized profiles (as a function of velocity) for several interstellar absorption lines extracted from the STIS and GHRS observations of G191-B2B. We have normalized the data using low-order polynomial fits (see Howk et al. 1999). The GHRS Si III profile is taken from the very end of the order, and the continuum is uncertain for this profile. The O I (and possibly also N I) absorption includes a telluric contribution. The expected velocities of the telluric absorption in the STIS and GHRS data are marked.
The atmospheric O I is cleanly separated from the interstellar absorption in the STIS data. Again we see that the agreement between GHRS and STIS observations of the same absorption lines is excellent. There is no compelling reason, given the comparisons shown in Figures 17 and 18, to believe the earlier GHRS data provide different results than the STIS data for this sightline, as suggested by Sahu et al. (1999).

## 6 Summary and Recommendations

We have presented a simple approach to estimating the on-order background spectrum in the echelle modes of STIS. Our algorithm fits the cross-dispersion profile of the inter-order light and uses this fit to estimate the on-order background. The resulting background-corrected net spectra show strong interstellar lines with zero residual flux, as expected. The most important aspects of our algorithm, and of the STIS echelle background in general, are as follows.

1. STIS echelle data contain significant amounts of scattered light. The amount of scattered light present depends upon wavelength. The ratio of the on-order background to gross spectra, $`b_\lambda /g_\lambda `$, varies as roughly $`\lambda ^{-3}`$ and ranges from $`\sim 0.1`$ at long wavelengths to $`\sim 0.5`$ at short wavelengths.

2. The effectiveness of the background-correction algorithm presented here depends upon wavelength and the signal-to-noise ratio of the spectrum being analyzed. Short wavelength STIS echelle data, $`\lambda \lesssim 1300`$ Å, often show some residual artifacts in the cores of saturated interstellar absorption lines, though the difficulties are most important for wavelengths shortward of Ly$`\alpha `$. This is in part due to the close spacing of the spectral orders in STIS echelle mode observations at short wavelengths. Low signal-to-noise ratio data cause greater difficulties in the background fitting process. The over-subtraction of portions of short wavelength saturated lines is typically 1–2% of the local continuum.

3. Scattered light and the uncertainties involved in removing it can affect spectral line analyses and must be taken into account when analyzing lines with substantial optical depths. The potential systematic uncertainties in the optical depths for lines with (non-zero) residual fluxes $`\lesssim 10\%`$ of the local continuum can be $`\gtrsim 10\%`$.

4. Comparisons of high-resolution STIS data with GHRS high- and intermediate-resolution data show no evidence for a significant difference between STIS data reduced with our background subtraction technique and GHRS data reduced using the standard background-subtraction techniques for that instrument (Cardelli et al. 1990, 1993).

We believe our data extraction routine provides reliable net spectra for the high-resolution modes of STIS, within the framework of the caveats and problems outlined in this work. However, we believe that much work still needs to be done to fully understand the STIS background. Given the possible importance of a reduction scheme such as this, even though it is admittedly flawed in some respects, we will make our IDL extraction routines available to the general astronomical community. Please contact the authors for more information. We appreciate useful comments from C. Churchill, B. Savage, and E. Jenkins. We thank A. Vidal-Madjar for allowing us to compare the GHRS Ly$`\alpha `$ profile for G191-B2B with his own. We also thank J. Valenti for allowing us to test our procedure on his STIS data for $`\alpha `$ Cen.
We acknowledge support from NASA Long Term Space Astrophysics grant NAG5-3485 and grant GO-0720.01-96A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
# Measurements of critical current diffraction patterns in annular Josephson junctions

## Abstract

We report systematic measurements of the critical current versus magnetic field patterns of annular Josephson junctions in a wide magnetic field range. A modulation of the envelope of the pattern, which depends on the junction width, is observed. The data are compared with theory and good agreement is found.

Large area Josephson junctions are intriguing objects for performing experiments on nonlinear electrodynamics. In particular, the propagation of solitons, also called Josephson vortices or fluxons, has attracted a lot of attention and has been studied in detail. Recently, large area Josephson junctions have been proposed as efficient radiation and particle detectors. In such junctions, the spatial dependence of the phase difference between the superconducting electrodes is an important characteristic that determines the junction properties. In an annular Josephson junction, magnetic flux quanta threading one superconducting loop but not the other are trapped and stored in the junction due to the fluxoid quantization. This property of the system offers the unique possibility to study fluxon dynamics in the absence of collisions with boundaries. In the annular junctions proposed for radiation and particle detection, trapped vortices are useful to suppress the critical current of the junction in order to allow for a stable bias point in the subgap region of the junction current-voltage characteristic. In this report, we present systematic measurements of the critical current of annular Josephson junctions as a function of the externally applied in-plane magnetic field. The critical current $`I_c`$ of a junction without trapped fluxons is at maximum when no magnetic fields are present. In the presence of a magnetic field this maximum superconducting current is reduced. Magnetic fields can be due to the bias current applied to the junction (self-fields), due to flux trapped in the junction itself or its leads (Josephson or Abrikosov vortices, respectively), or they can be applied externally. The modulation of the critical current with the external field is often called a critical current diffraction pattern. We investigate these patterns for annular junctions of various dimensions in a wide range of magnetic fields. We present experimental data on five annular Josephson junctions with the same external radius $`r_e=50\mu \mathrm{m}`$ but different inner radii $`r_i`$ ranging from 30 to 47 $`\mu `$m, see the second and third columns of Tab. I. Hence the width $`w=r_e-r_i`$ of the junctions ranges from $`3`$ to $`20\mu \mathrm{m}`$. The junction geometry is shown in Fig. 1. All junctions have been prepared on the same chip using Hypres technology with a nominal critical current density of $`j_c=100\mathrm{A}/\mathrm{cm}^2`$. Accordingly, the Josephson length is approximately $`30\mu \mathrm{m}`$ at 4.2 K. In Fig. 2 the critical current diffraction patterns of the two junctions B and D, which are representative of the set of measured samples, are shown. Obviously, a strong dependence of the pattern on the junction width is observed. As expected, the critical current of the junction at zero field scales with the junction area as $`I_c=j_c\pi (r_e^2-r_i^2)`$. Measuring the diffraction patterns in a wide range of magnetic field, we observe two characteristic modulation scales of the critical current.
The pattern having a small magnetic field period $`\mathrm{\Delta }H`$ has an envelope of the larger period $`\mathrm{\Delta }H^{}`$ which depends strongly on the width of the junction (compare Figs. 2a and b). The observed critical current diffraction patterns can be qualitatively understood in the following way. The modulation of the period $`\mathrm{\Delta }H`$ is due to the penetration of magnetic flux in the direction perpendicular to the external magnetic field. This period is inversely proportional to the diameter of the junction: $`\mathrm{\Delta }H\propto 1/(2r_e)`$. This is analogous to the standard case, where $`\mathrm{\Delta }H`$ is proportional to the reciprocal of the junction length in the direction perpendicular to the magnetic field. The minima of the modulation of the period $`\mathrm{\Delta }H^{}`$ occur when the magnetic flux penetrates the junction strongly also along the *width* of the junction. Therefore, the period $`\mathrm{\Delta }H^{}`$ of the second modulation is proportional to $`1/w`$. By calculating the ratio $$\frac{\mathrm{\Delta }H^{}}{\mathrm{\Delta }H}=\frac{2r_e}{w},$$ (1) for the different junctions, this simple prediction can be quantitatively compared with experiment. As can be seen from the fourth and fifth columns of Tab. I, Eq. (1) is quite accurately fulfilled for our junctions. The described effect can be illustrated by plotting the supercurrent density $`j_s`$ at different magnetic fields versus the junction coordinates. At the magnetic field $`H=3.25\mathrm{Oe}<\mathrm{\Delta }H^{}`$, approximately two and a half flux quanta penetrate into the junction cross section $`2r_e`$, as shown in the inset I of Fig. 2b. At the larger field $`H=12.2\mathrm{Oe}>\mathrm{\Delta }H^{}`$, more than one flux quantum penetrates the width cross section of the junction (see inset II of Fig. 2b). Thus, after each period $`\mathrm{\Delta }H^{}`$, one additional flux quantum has penetrated the width of the junction. We note here that the spatial distribution of the supercurrent density could also be measured in experiment. Several approaches to calculating the critical current diffraction patterns of annular Josephson junctions have been published earlier. Mainly, two different cases have been considered: the long annular Josephson junction with a circumference $`2\pi \overline{r}`$ larger than the Josephson length $`\lambda _J`$ and the small annular Josephson junction where $`2\pi \overline{r}<\lambda _J`$, where $`\overline{r}=(r_i+r_e)/2`$ is the mean radius of the junction. The most complete theoretical description of the critical current diffraction pattern $`I_c(H)`$ of small annular junctions of arbitrary width and number of trapped fluxons $`n`$ is presented by Nappi in Ref. . The dependence of the critical current $`I_c`$ on the magnetic field $`H`$ is given by the formula $$I_c=I_0\left|\frac{2}{1-\delta ^2}\int _\delta ^1xJ_n\left(x\frac{H}{H_0}\right)dx\right|,$$ (2) where $`J_n`$ is the $`n`$-th Bessel function of integer order, $`\delta =r_i/r_e`$ is the ratio of the inner radius $`r_i`$ to the outer radius $`r_e`$, and $`I_0`$ is the maximum superconducting current at zero field. The field $$H_0=\mathrm{\Phi }_0/(2\pi r_e\mu _0\mathrm{\Lambda })$$ (3) is the characteristic magnetic field; $`\mathrm{\Phi }_0`$ is the flux quantum, $`\mu _0`$ is the vacuum permeability and $`\mathrm{\Lambda }`$ is the effective magnetic thickness. For $`n=0`$, the two extreme cases $`\delta \to 1`$ (see Ref. ) and $`\delta \to 0`$ (Ref. ) of Eq. (2) have been discussed in the literature.
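A minimal numerical sketch of Eq. (2) is given below (our illustration, not the analysis code used for the fits discussed next); it evaluates the normalized pattern $`I_c/I_0`$ as a function of the reduced field $`h=H/H_0`$ for a given ratio $`\delta `$ and number $`n`$ of trapped fluxons.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def ic_pattern(h, delta, n=0):
    """Normalized critical current |I_c/I_0| of Eq. (2) at reduced field
    h = H/H_0, for radius ratio delta = r_i/r_e and n trapped fluxons."""
    integral, _ = quad(lambda x: x * jv(n, x * h), delta, 1.0)
    return abs(2.0 * integral / (1.0 - delta**2))

# Illustrative pattern for delta = 0.8 and no trapped flux; at h = 0 and
# n = 0 the integral equals (1 - delta^2)/2, so I_c/I_0 = 1 as it must.
h_grid = np.linspace(0.0, 40.0, 401)
pattern = [ic_pattern(h, delta=0.8) for h in h_grid]
```

In a fitting procedure such as the one described below, this model would simply be wrapped with the scale parameters $`I_0`$, $`H_0`$, and $`\delta `$ inside a standard least-squares routine.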
The predictions of Eq. (2) have also been compared to experiments in a relatively small magnetic field range. To our knowledge, there has been no systematic comparison of the theory with experimental data for different junction widths in a large field range. Our intention here is to perform such a comparison. In Figure 2, our experimental data are fitted to Eq. (2). In the fitting procedure the values of both $`H_0`$ and $`\delta `$ are determined. Subsequently, the quantities acquired from the fits are labeled by a tilde ($`\stackrel{~}{H}_0`$, $`\stackrel{~}{\delta }`$). For the fit, the initial value of $`\stackrel{~}{\delta }`$ is calculated from the designed geometry of the junction; the initial $`\stackrel{~}{H_0}`$ is calculated according to Eq. (3) assuming the reasonable value of $`200\mathrm{nm}`$ for the magnetic thickness $`\mathrm{\Lambda }`$. Then, the best fit is found by iteratively adjusting $`\stackrel{~}{H_0}`$ and $`\stackrel{~}{\delta }`$. The value of $`\stackrel{~}{H}_0`$ predominantly determines the small period $`\mathrm{\Delta }H`$ of the critical current modulation, whereas $`\stackrel{~}{\delta }`$ determines the large modulation scale $`\mathrm{\Delta }H^{}`$. This fact is in agreement with the qualitative discussion above. As can be seen from Fig. 2, excellent agreement between theory and experiment is found. The parameters $`\stackrel{~}{\delta }`$ and $`\stackrel{~}{H}_0`$ determined from the best fits to the data of junctions A to E are quoted in Tab. I. Comparing the values of $`\delta `$ and $`\stackrel{~}{\delta }`$ in Tab. I, we find that $`\stackrel{~}{\delta }<\delta `$ for all junctions. This small but systematic deviation can be explained by assuming a symmetric deviation $`\mathrm{\Delta }r`$ of the junction radii from their designed dimensions (e.g., due to the photolithographic procedure during the preparation). Using this assumption, $`\stackrel{~}{\delta }`$ can be expressed as $$\stackrel{~}{\delta }=\frac{r_i+\mathrm{\Delta }r}{r_e-\mathrm{\Delta }r}.$$ (4) From the fits we find that $`\mathrm{\Delta }r`$ varies between $`0.5`$ and $`1.0\mu \mathrm{m}`$ (see Tab. I). This size correction can also be explained as due to a slight over-etching of the trilayers during sample fabrication, which results in a small reduction of the sample size. The obtained $`\mathrm{\Delta }r`$ values agree with the size tolerance quoted by Hypres. According to theory, the quantity $`H_0`$ depends only on the outer junction radius $`r_e`$ and hence should be identical for all junctions measured. Instead, we find values of $`\stackrel{~}{H}_0`$ from the fits that vary slightly from junction to junction, see Tab. I. Using Eq. (3) the magnetic thickness $`\stackrel{~}{\mathrm{\Lambda }}`$ can be calculated from $`\stackrel{~}{H}_0`$. The values of $`\stackrel{~}{\mathrm{\Lambda }}`$ obtained for each junction are quoted in the last column of Tab. I. The average magnetic thickness is $`\stackrel{~}{\mathrm{\Lambda }}=191\pm 18\mathrm{nm}`$, which is in good agreement with the value of $`\mathrm{\Lambda }\approx 2\lambda _L`$ in the thick film limit, yielding a London penetration depth of $`\lambda _L\approx 95\mathrm{nm}`$. The scatter observed in $`\stackrel{~}{H}_0`$ (or, equivalently, in $`\stackrel{~}{\mathrm{\Lambda }}`$) may be due to a small number $`m`$ of flux quanta threading the holes of both junction electrodes simultaneously. The critical current diffraction patterns for different values of $`m`$ are very similar in their qualitative features, but may differ quantitatively.
Preliminary experimental results show that, upon cooling the junction from the normal to the superconducting state in a small residual magnetic field a large number of times and measuring the resulting critical current versus magnetic field, slightly different diffraction patterns are observed depending on the value of $`m`$. In such measurements we have observed only three different diffraction patterns, despite repeating the described procedure a large number of times. This strongly suggests that this effect is due to magnetic flux threading the junction loop perpendicular to the substrate. At small fields, we observe a systematic deviation of the calculated patterns from the experimental ones. In particular, the first minimum of the critical current appears at larger field values than predicted by the theory. Moreover, the critical current at the first minimum does not fall to zero. Both facts are to be expected for junctions that are not really small in comparison with $`\lambda _J`$. Indeed, the dimensions of our junctions are slightly larger than $`\lambda _J`$. This leads to an inhomogeneous penetration of the magnetic field into the junction at low fields, resulting in an increase of the field value $`H`$ at which the first minimum of the pattern is observed. An analogous effect is observed in conventional long Josephson junctions. At higher temperatures, the Josephson length $`\lambda _J`$ increases and, hence, the effective size of the junction decreases. In the inset of Fig. 3, the calculated normalized external junction radius $`r_e/\lambda _J`$ is plotted versus temperature, taking into account the temperature dependence of both the critical current density $`j_c(T)`$ and the London penetration depth $`\lambda _L(T)`$. At $`T>7.8\mathrm{K}`$ the normalized radius drops below unity. Therefore, at higher temperatures a better agreement between experimental data and theory can be expected at low fields. This is illustrated in Fig. 3, where the experimental critical current diffraction pattern of junction B is plotted together with a fit for the temperatures $`T=4.0`$, $`7.0`$, and $`8.5\mathrm{K}`$. The fit is made keeping $`\stackrel{~}{\delta }`$ constant for all $`T`$ and adjusting $`\stackrel{~}{H}_0`$. At elevated temperatures, both the position of the first minimum and the modulation depth of the critical current at small fields show better agreement with the theoretical prediction. We have also measured junctions with a single fluxon trapped in the junction barrier ($`n=1`$). As an example, the critical current diffraction pattern of junction B for $`n=1`$ at 4.2 K is shown in Fig. 4. Taking the same fitting parameters as for the case of no trapped fluxon ($`n=0`$), we find agreement between the theory and the experimental data as good as before. The slight differences between the fit and the experimental data are, again, due to the dimensions ($`r_e>\lambda _J`$) of the junction. In the inset of Fig. 4 the supercurrent distribution in junction B at $`H=1.71\mathrm{Oe}`$, calculated according to Eq. (2), is shown. Obviously, at this field a number of vortex anti-vortex pairs have penetrated into the junction, but the width of the junction is not fully penetrated (compare Fig. 2b, inset II). The symmetry of the current distribution in the junction is broken due to the presence of the trapped vortex. Similar current distributions in the presence of trapped vortices have also been observed in experiment.
It is worth pointing out that good agreement between theory and experiment in the large field range is found for junctions of a diameter substantially larger than $`\lambda _J`$. At low fields the theory describes well the experiments with $`r_e<\lambda _J`$, as confirmed by our measurements at higher temperatures. Thus, the magnetic properties of the junction are determined by the junction radius rather than by the junction circumference, as already pointed out in Ref. . In summary, we have systematically measured the critical current diffraction patterns of a number of annular junctions of different widths, with and without trapped fluxons, in a wide magnetic field range and at different temperatures. The experimental data show a pronounced width dependence that is explained accurately using the existing theory. In particular, a modulation of the envelope of the critical current diffraction pattern is observed for junctions of large width. The period of this modulation depends very sensitively on the normalized junction size described by the parameter $`\delta `$. The method of our data analysis is accurate enough to detect a small reduction of the size of the junction due to the fabrication process.
# Gravitational lensing of Type Ia supernovae

## I Introduction

Gravitational lensing has become an increasingly important tool in astrophysics and cosmology. In particular, the effects of lensing have to be taken into account when studying sources at high redshifts. In an inhomogeneous universe, sources may be magnified or demagnified with respect to the case of a homogeneous universe with the same average energy density. The effects of gravitational lensing have been studied numerically by a number of authors, see, e.g., . The most common method traces light rays through inhomogeneous matter distributions obtained from N-body simulations. Lensing effects are accounted for by projecting matter onto lens planes, and using the thin-lens approximation (see, e.g., ). Recently, Holz and Wald (HW; ) have proposed another ray-tracing method for examining lensing effects in inhomogeneous universes. This method can be summarized as follows: First, a Friedmann-Lemaître (FL) background geometry is selected. Inhomogeneities are accounted for by specifying matter distributions in cells with energy density equal to that of the underlying FL model. A light ray is traced backwards to the desired redshift by being sent through a series of cells, each time with a randomly selected impact parameter. After each cell, the FL background is used to update the scale factor and expansion. By using Monte Carlo techniques to trace a large number of light rays, and by appropriate weighting, statistics for the apparent luminosity of the source are obtained. The advantages of this method are that light rays are traced through a three-dimensional matter distribution without projection onto lens planes, thus avoiding any assumptions regarding the accuracy of the thin-lens approximation. Furthermore, the method is flexible in the sense that cells may be taken to represent both galaxies and larger structures with different matter distributions, including non-spherical ones. For instance, HW have performed a number of tests to determine the effects of clustering, and argue that clustering does not significantly affect the statistical properties of the magnification. They also investigate the case of substructure in the form of compact objects, and conclude that this can be adequately modelled by randomly distributed compact objects of arbitrary mass. It should be pointed out that the method is not well suited to modelling clustering on scales larger than cell sizes. Still, galaxy clusters can be modelled by specifying appropriate masses with correspondingly larger cells. Another drawback is that the method only considers infinitesimal ray bundles, making it impossible to keep track of multiple images. However, it is still possible to distinguish between primary images and images that have gone through one or several caustics. HW considered pressure-less models with a cosmological constant, using the following matter distributions: point masses; singular, truncated isothermal spheres (SIS); uniform spheres; and uniform cylinders. The individual masses were determined from the underlying FL model using a fixed co-moving cell radius of $`R_c=2\text{ Mpc}`$, reflecting typical galaxy-galaxy separation length-scales. The aim of this paper is to allow for matter distributions more accurately describing the actual properties of galaxies. We will extend the list of matter distributions to include the density profile proposed by Navarro, Frenk and White (NFW; ) and we will use a distribution of galaxy masses.
Also, other matter distribution parameters, such as the scale radius of the NFW halo and the cut-off radius of the SIS halo, will be determined from distributions reflecting real galaxy properties. The method of HW has also been generalized in Bergström et al. to allow for general perfect fluids with non-vanishing pressure. Gravitational lensing effects may be of importance when, e.g., trying to determine cosmological parameters using observations of supernovae at high redshifts. In this paper, we study the effect of lensing on the luminosity distribution of a large sample of Type Ia supernovae at redshift $`z=1`$.

## II Mass distribution

Realistic modelling of galaxies calls for realistic mass distributions and number densities, i.e., one has to allow for the cell radius, $`R_c`$, to reflect the actual distances between galaxies. An advantage of the method of HW is that it is very easy to allow for any mass distribution and number density, including possible redshift dependencies, as long as the average density agrees with the underlying FL model. Thus, for each cell we obtain a random mass, $`M`$, from a galaxy mass distribution $`dn/dM`$, and calculate the corresponding radius from the condition that the average energy density in the cell should be equal to the average matter density of the universe at the redshift of the cell: $$M=\frac{4\pi }{3}\mathrm{\Omega }_M\rho _{\mathrm{crit}}R_c^3,$$ (1) where $`\mathrm{\Omega }_M`$ is the normalized matter density, and $`\rho _{\mathrm{crit}}=3H^2/8\pi `$ is the critical density. A galaxy mass distribution can be obtained, for example, by combining the Schechter luminosity function (see, e.g., Peebles , Eq. 5.129) $$dn=\varphi _{*}y^\alpha e^{-y}dy,$$ (2) $$y=\frac{L}{L_{*}},$$ (3) with the mass-to-luminosity ratio (see, e.g., Peebles , Eq. 3.39) normalized to a “characteristic” galaxy with $`L=L_{*}`$ and $`M=M_{*}`$, $$\frac{M}{M_{*}}=y^{1/(1-\beta )}.$$ (4) Using Eq. (2), we find that $$\frac{dn}{dM}\propto y^\delta e^{-y},$$ (5) $$\delta =\alpha -\frac{\beta }{1-\beta }.$$ (6) Assuming that the entire mass of the universe resides in galaxy halos we can write $$\int _{y_{\mathrm{min}}}^{y_{\mathrm{max}}}n(y)M(y)dy=\rho _m.$$ (7) Using the Schechter luminosity function and the mass-to-luminosity fraction we get $$M_{*}=\frac{\mathrm{\Omega }_M\rho _{\mathrm{crit}}}{n_{*}\int _{y_{\mathrm{min}}}^{y_{\mathrm{max}}}y^{\alpha +\frac{1}{1-\beta }}e^{-y}dy}.$$ (8) Thus, by supplying values for $`n_{*}`$, which is reasonably well-determined by observations, and for $`y_{\mathrm{min}}`$ and $`y_{\mathrm{max}}`$, on which the dependence of $`M_{*}`$ is weak, together with the parameters $`\alpha `$ and $`\beta `$, we can obtain an $`M_{*}`$ consistent with $`\mathrm{\Omega }_M`$. For the parameter values used in this paper (see Sec. V), we get $$M_{*}\approx 7.5\mathrm{\Omega }_M\times 10^{13}M_{\odot }.$$ (9)

## III The Navarro-Frenk-White distribution

In the work of HW, the treatment of realistic galaxy models has been limited to the use of the singular, truncated isothermal sphere (SIS). Another often-used matter distribution is the one based on the results of detailed N-body simulations of structure formation by Navarro, Frenk and White . The NFW density profile is given by $$\rho (r)=\frac{\rho _{\mathrm{crit}}\delta _c}{(r/R_s)\left[1+(r/R_s)\right]^2},$$ (10) where $`\delta _c`$ is a dimensionless density parameter and $`R_s`$ is a characteristic radius.
The potential for this density profile is given by $$\mathrm{\Phi }(r)=-4\pi \rho _{\mathrm{crit}}\delta _cR_s^2\frac{\mathrm{ln}(1+x)}{x}+\mathrm{const}.,$$ (11) where $`x=r/R_s`$. The matrix $`J_\beta ^\alpha `$, describing the evolution of a light beam passing through a cell \[see Eq. (37) in HW\], can then be obtained analytically, see . The mass inside radius $`r`$ of a NFW halo is given by $$M(r)=4\pi \rho _{\mathrm{crit}}\delta _cR_s^3\left[\mathrm{ln}(1+x)-\frac{x}{1+x}\right].$$ (12) Combining this expression with Eq. (1), i.e., setting $`M=M(r)`$, we obtain $$\delta _c=\frac{\mathrm{\Omega }_M}{3}\frac{x_c^3}{\left[\mathrm{ln}(1+x_c)-\frac{x_c}{1+x_c}\right]},$$ (13) where $`x_c=R_c/R_s`$. That is, for a given mass $`M`$, $`\delta _c`$ is a function of $`R_s`$. From the numerical simulations of NFW we also get a relation between $`\delta _c`$ and $`R_s`$. This relation is computed numerically by a slight modification of a Fortran routine kindly supplied by Julio Navarro. Of course, one wants to find an $`R_s`$ compatible with both the average density in each cell and the numerical simulations of NFW. Hence, we iteratively determine a value of $`R_s`$ consistent with both expressions for $`\delta _c`$. Generally, $`R_s`$ will be a function of the mass $`M`$, the Hubble parameter $`h`$, the density parameters $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and the redshift $`z`$. However, we will use the result from Del Popolo and Bullock et al. that $`R_s`$ is approximately constant with redshift. We will compute $`R_s`$ for a variety of $`M`$, $`h`$ and $`\mathrm{\Omega }_M`$ (all at $`z=0`$) in both open and flat cosmologies and interpolate between these values to obtain $`R_s`$ for any combination of parameter values.

## IV Truncation radii for SIS lenses

In their calculations for SIS halos, HW use a fixed truncation radius $`d`$. However, using a realistic mass distribution, the cut-off should depend on the mass of the galaxy. Here we derive an expression for $`d`$. The SIS density profile is given by $$\rho _{\mathrm{SIS}}(r)=\frac{\sigma ^2}{2\pi }\frac{1}{r^2},$$ (14) where $`\sigma `$ is the line-of-sight velocity dispersion of the mass particles. The mass of a SIS halo truncated at radius $`d`$ is then given by $$M(d)=\int _0^d\rho (r)dV=2\sigma ^2d.$$ (15) We want this to be equal to the mass $`M`$ given by the Schechter distribution, $$2\sigma ^2d=M\Rightarrow d=\frac{M_{*}}{2\sigma _{*}^2}\left(\frac{M}{M_{*}}\right)\left(\frac{\sigma }{\sigma _{*}}\right)^{-2},$$ (16) where we, in addition to $`M_{*}`$, have introduced a characteristic velocity dispersion $`\sigma _{*}`$. Combining the Faber-Jackson relation $$\frac{\sigma }{\sigma _{*}}=y^\lambda $$ (17) with the mass-to-luminosity ratio, Eq. (4), we can substitute for $`\sigma `$ in Eq. (16), and obtain $$d=\frac{M_{*}}{2\sigma _{*}^2}\left(\frac{M}{M_{*}}\right)^{1-2\lambda (1-\beta )}.$$ (18) Using Eq. (9), we can write the truncation radius for a halo with mass $`M=M_{*}`$ as $$d\approx 3.3\mathrm{\Omega }_M\mathrm{Mpc}.$$ (19)

## V Results

As an application of the method, we investigate lensing effects on observations of distant supernovae. In Fig. 1, we compare the luminosity distributions obtained with point masses, SIS lenses and NFW matter distributions in a $`\mathrm{\Omega }_M=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ universe, currently favoured by Type Ia supernova measurements . Sources are assumed to be perfect standard candles.
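The construction of Secs. II–IV is simple to prototype. The following sketch (our illustration, not the authors' code) draws a galaxy mass from the Schechter-based distribution of Eqs. (2)–(6) and evaluates the corresponding SIS truncation radius from Eqs. (18)–(19); the parameter values are those listed in Sec. V, while the rejection-sampling scheme and random seed are assumptions made for this example.

```python
import numpy as np

# Parameters from the list in Sec. V
alpha, beta, lam = -0.7, 0.2, 0.25
y_min, y_max = 0.5, 2.0
delta_exp = alpha - beta / (1.0 - beta)        # Eq. (6): exponent in dn/dM

def sample_y(rng):
    """Draw y = L/L_* from dn/dM ~ y^delta e^(-y) on [y_min, y_max] by
    rejection sampling; the endpoint envelope is valid here because the
    density is monotone on this interval."""
    env = max(y**delta_exp * np.exp(-y) for y in (y_min, y_max))
    while True:
        y = rng.uniform(y_min, y_max)
        if rng.uniform(0.0, env) < y**delta_exp * np.exp(-y):
            return y

rng = np.random.default_rng(1)
y = sample_y(rng)
m_over_mstar = y ** (1.0 / (1.0 - beta))       # Eq. (4): M/M_*

# Eq. (18), normalized with Eq. (19): d(M_*) ~ 3.3 Omega_M Mpc
omega_m = 0.3
d_mpc = 3.3 * omega_m * m_over_mstar ** (1.0 - 2.0 * lam * (1.0 - beta))
```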
The magnification, given in magnitudes, has its zero point at the filled beam value, i.e., the value one would get in a homogeneous universe. Note that this value is cosmology dependent. Note also that negative values correspond to demagnifications and positive values to magnifications. The point mass case (full line) agrees well with the results of Holz and Wald. The SIS case (dashed line) is shifted towards the filled beam value when compared to HW. This is due to the fact that the cut-off radii computed according to Eq. (18) generally are much larger than the fixed value of $`d=200`$ kpc used by HW. Note that results for SIS halos and NFW halos are very similar, even when we have no intrinsic luminosity dispersion of the sources. In Fig. 2 we have added an intrinsic luminosity dispersion represented by a Gaussian distribution with $`\sigma _m=0.16`$ mag., due to the fact that Type Ia supernovae are not perfect standard candles. The effect is to make the characteristics of the luminosity distributions even less pronounced, since the form of the resulting luminosity distributions is predominantly determined by the form of the intrinsic luminosity distribution. It is still possible to observationally distinguish whether lenses consist of compact objects or smooth galaxy halos, as has been pointed out in . Generating several samples containing 100 supernova events at $`z=1`$ in a $`\mathrm{\Omega }_M=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ cosmology filled with smooth galaxy halos, we find that for 98 % of the samples one can rule out a point-mass distribution with a 99 % confidence level. (However, in 1 % of the samples, we will erroneously rule out the halo distribution with the same confidence level.) Furthermore, for a similar sample containing 200 supernovae, the confidence level is increased to 99.99 %. We have performed simulations for various cosmologies, and found a substantial difference between SIS halos and NFW halos only in a matter-dominated universe, $`\mathrm{\Omega }_M=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, where the luminosity distribution using NFW halos is shifted towards the point-mass case (see Fig. 3). However, adding an intrinsic source dispersion as in Fig. 4, we see that even with the phenomenal statistics of 10 000 sources, it would be a difficult task to distinguish between the two density profiles. The increased number of high-magnification events for NFW halos would probably be the only way to make such a discrimination. In these calculations, we have used the following parameter values (see further ):

* $`\beta =0.2`$
* $`\alpha =-0.7`$
* $`y_{\mathrm{min}}=0.5`$
* $`y_{\mathrm{max}}=2.0`$
* $`n_{*}=1.9\times 10^{-2}h^3\mathrm{Mpc}^{-3}`$
* $`\sigma _{*}=220\text{ km/s}`$
* $`\lambda =0.25`$

A more extensive discussion of the luminosity distributions of perfect standard candles obtained with the different halo models at different source redshifts can be found in Bergström et al. , where also some analytical fitting formulas for the probability distributions are given.

## VI Discussion

In this paper, the method of Holz and Wald has been generalized to allow for matter distributions reflecting the actual properties of galaxies, including the density profile proposed by Navarro, Frenk and White . In order to make matter distributions as realistic as possible, all parameter values in the lens models are obtained from reasonable probability distributions, as derived from observations and N-body simulations.
This includes the mass of the galaxies, the truncation radius of SIS lenses and the characteristic radius of NFW halos. One of the virtues of this method is that it can be continuously refined as one gains more information about the matter distribution in the universe from observations. The motivation for these generalizations is to use this method as part of a model for simulation of high-redshift supernova observations. In this paper, we have considered lensing effects on supernova luminosity distributions. Results for different mass distributions in smooth dark matter halos were found to be very similar, making lensing effects predictable for a broad range of density profiles. Furthermore, given a sample of 100 supernovae at $`z\approx 1`$, one should be able to discriminate between the case with smooth dark matter halos and the (unlikely) case of having a dominant component of dark matter in point-like objects.

## Acknowledgements

The authors would like to thank Daniel Holz and Bob Wald for helpful comments, and Julio Navarro for providing his numerical code relating the parameters of the NFW model.
# Functional Inversion for Potentials in Quantum Mechanics

Richard L. Hall, Department of Mathematics and Statistics, Concordia University, 1455 de Maisonneuve Boulevard West, Montréal, Québec, Canada H3G 1M8. email: rhall@cicma.concordia.ca

Abstract Let $`E=F(v)`$ be the ground-state eigenvalue of the Schrödinger Hamiltonian $`H=-\mathrm{\Delta }+vf(x),`$ where the potential shape $`f(x)`$ is symmetric and monotone increasing for $`x>0,`$ and the coupling parameter $`v`$ is positive. If the kinetic potential $`\overline{f}(s)`$ associated with $`f(x)`$ is defined by the transformation $`\overline{f}(s)=F^{\prime }(v),s=F(v)-vF^{\prime }(v),`$ then $`f`$ can be reconstructed from $`F`$ by the sequence $`f^{[n+1]}=\overline{f}\circ \overline{f}^{[n]^{-1}}\circ f^{[n]}.`$ Convergence is proved for special classes of potential shape; for other test cases it is demonstrated numerically. The seed potential shape $`f^{[0]}`$ need not be ‘close’ to the limit $`f.`$

PACS 03 65 Ge

1. Introduction

We consider the Schrödinger operator $$H=-\mathrm{\Delta }+vf(x)$$ (1.1) defined on some suitable domain in $`L^2(\text{R}).`$ The potential has two aspects: an ‘attractive’ potential shape $`f(x),`$ and a coupling parameter $`v>0.`$ We assume that $`f(x)`$ is symmetric, non-constant, and monotone increasing for $`x>0.`$ Elementary trial functions can be designed for such an operator to prove that for each $`v>0`$ there is a discrete eigenvalue $`E=F(v)`$ at the bottom of the spectrum. If, in addition to the above properties, we also assume that $`f`$ is continuous at $`x=0`$ and that it is piecewise analytic, then we are able to prove that $`f`$ is uniquely determined by $`F.`$ The subject of the present paper is the reconstruction of the potential shape $`f`$ from knowledge of the ‘energy trajectory’ $`F.`$ This is an example of what we call ‘geometric spectral inversion’ . Geometric spectral inversion should be distinguished from the ‘inverse problem in the coupling constant’ which has been analysed in detail by Chadan et al \[3-8\]. In the latter problem the discrete part of the input data consists of the set $`\{v_i\}`$ of values of the coupling that yield a given fixed energy $`E.`$ Inversion from excited-state energy trajectories $`F_k(v),k>0,`$ has also been studied, by a complete inversion of the WKB approximation for bound states . For the ground-state energy trajectory $`F(v)=F_0(v)`$ a constructive numerical inversion algorithm has been devised , and an inversion inequality has been established . The work reported in the present paper also concerns inversion from the ground-state energy trajectory, but the approach uses functional methods which have a natural extension to the higher energy trajectories. Geometry is involved with this problem because we deal with a family of operators depending on a continuous parameter $`v.`$ This immediately leads to a family of spectral manifolds, and, more particularly, to the consideration of smooth transformations of potentials, and to the transformations which they in turn induce on the spectral manifolds. This is the environment in which we are able to construct the following functional inversion sequence that is the central theme of the present paper: $$f^{[n+1]}=\overline{f}\circ \overline{f}^{[n]^{-1}}\circ f^{[n]}=\overline{f}\circ K^{[n]}.$$ (1.2) A kinetic potential is the constrained mean value of the potential shape $`\overline{f}(s)=<f>,`$ where the corresponding mean kinetic energy $`s=<-\mathrm{\Delta }>`$ is held constant.
It turns out that kinetic potentials may be obtained from the corresponding energy trajectory $`F`$ by what is essentially a Legendre transformation $`\overline{f}\leftrightarrow F`$ given by $$\{\overline{f}(s)=F^{\prime }(v),s=F(v)-vF^{\prime }(v)\}\Leftrightarrow \{F(v)/v=\overline{f}(s)-s\overline{f}^{\prime }(s),1/v=-\overline{f}^{\prime }(s)\}.$$ (1.3) As we shall explain in more detail in Section 2, these transformations are well defined because of the definite convexities of $`F`$ and $`\overline{f};`$ they complete the definition of the inversion sequence (1.2), up to the choice of a starting seed potential $`f^{[0]}(x).`$ They differ from Legendre transformations only because of our choice of signs. The choice has been made so that the eigenvalue can be written (exactly) in the semi-classical forms $$E=F(v)=\underset{s>0}{\mathrm{min}}\left\{s+v\overline{f}(s)\right\}=\underset{x>0}{\mathrm{min}}\left\{K^{[f]}(x)+vf(x)\right\}$$ (1.4) where the kinetic- and potential-energy terms have the ‘usual’ signs. After more than 70 years of QM (and even more of the Sturm-Liouville problem) it may appear to be an extravagance to seek to rewrite the min-max characterization of the spectrum in slightly different forms, with kinetic potentials and K functions. The main reason for our doing this is that the new representations allow us to tackle the following problem: if $`g`$ is a smooth transformation, and we know the spectrum of $`-\mathrm{\Delta }+vf^{[0]}(x),`$ what is the spectrum of $`-\mathrm{\Delta }+vg(f^{[0]}(x))`$? In the forward direction (obtaining eigenvalues corresponding to a given potential), an approximation called the ‘envelope method’ has been developed . The inversion sequence (1.2) was arrived at by an inversion of envelope theory, yielding a sequence of approximations for an initially unknown transformation $`g`$ satisfying $`f(x)=g(f^{[0]}(x)).`$ In order to make this paper essentially self-contained, the representation apparatus is outlined in Section 2. In Section 3 we use envelope theory to generate the inversion sequence. In Section 4 it is proved that the energy trajectory for a pure power potential is inverted from an arbitrary pure-power seed in only two steps: thus $`f^{[2]}=f`$ in these cases. In Section 5 we consider the exactly soluble problem of the sech-squared potential $`f(x)=-\mathrm{sech}^2(x).`$ Starting from the seed $`f^{[0]}(x)=-1+x^2/20,`$ we are able to construct the first iteration $`f^{[1]}`$ exactly; we then continue the sequence by using numerical methods. This illustration is interesting because the seed potential shape $`f^{[0]}`$ is very different from that of the target $`f(x)`$ and has a completely different, entirely discrete, spectrum. We consider also another sequence in which the seed is $`f^{[0]}(x)=-1/(1+x/5).`$ Convergence, which, of course, cannot be proved with the aid of a computer, is strongly indicated by both of these examples.

2. Kinetic potentials and K functions

The term ‘kinetic potential’ is short for ‘minimum mean iso-kinetic potential’.
If the Hamiltonian is $`H=-\mathrm{\Delta }+vf(x)`$, where $`f(x)`$ is a potential shape, and $`𝒟(H)\subset L^2(\text{R})`$ is the domain of $`H,`$ then the ground-state kinetic potential $`\overline{f}(s)=\overline{f}_0(s)`$ is defined by the expression $$\overline{f}(s)=\underset{\psi \in 𝒟(H),\;(\psi ,\psi )=1,\;(\psi ,-\mathrm{\Delta }\psi )=s}{\mathrm{inf}}(\psi ,f\psi ).$$ (2.1) The extension of this definition to the higher discrete eigenvalues (for $`v`$ sufficiently large) is straightforward but not explicitly needed in the present paper. The idea is that the min-max computation of the discrete eigenvalues is carried out in two stages: in the first stage (2.1) the mean potential shape is found for each fixed value of the mean kinetic energy $`s;`$ in the second and final stage we minimize over $`s.`$ Thus we arrive at the semi-classical expression which is the first equality of Eq. (1.4). It is well known that $`F(v)`$ is concave ($`F^{\prime \prime }(v)<0`$) and it follows immediately that $`\overline{f}(s)`$ is convex. More particularly, we have $$F^{\prime \prime }(v)\overline{f}^{\prime \prime }(s)=-\frac{1}{v^3}.$$ (2.2) Thus, although kinetic potentials are defined by (2.1), the transformations (1.3) may be used in practice to go back and forth between $`F`$ and $`\overline{f}.`$ Kinetic potentials have been used to study smooth transformations of potentials and also linear combinations. The present work is an application of the first kind. Our goal is to devise a method of searching for a transformation $`g,`$ which would convert the initial seed potential $`f^{[0]}(x)`$ into the (unknown) goal $`f(x)=g(f^{[0]}).`$ We shall summarize briefly how one proceeds in the forward direction, to approximate $`F,`$ if we know $`f(x).`$ The $`K`$ functions are then introduced, by a change of variable, so that the potential $`f(x)`$ is exposed and can be extracted in a sequential inversion process. In the forward direction we assume that the lowest eigenvalue $`F^{[0]}(v)`$ of $`H^{[0]}=-\mathrm{\Delta }+vf^{[0]}(x)`$ is known for all $`v>0`$ and we assume that $`f(x)`$ is given; hence, since the potentials are symmetric and monotone for $`x>0,`$ we have defined the transformation function $`g.`$ ‘Tangential potentials’ to $`g(f^{[0]})`$ have the form $`a+bf^{[0]}(x),`$ where the coefficients $`a(t)`$ and $`b(t)`$ depend on the point of contact $`x=t`$ of the tangential potential to the graph of $`f(x).`$ Each one of these tangential potentials generates an energy trajectory of the form $`ℰ(v)=av+F^{[0]}(bv),`$ and the envelope of this family (with respect to $`t`$) forms an approximation $`F^A(v)`$ to $`F(v).`$ If the transformation $`g`$ has definite convexity, then $`F^A(v)`$ will be either an upper or lower bound to $`F(v).`$ It turns out that all the calculations implied by this envelope approximation can be summarized nicely by kinetic potentials. Thus the whole procedure just described corresponds exactly to the expression: $$\overline{f}\approx \overline{f}^A=g\circ \overline{f}^{[0]},$$ (2.3) with $`\approx `$ being replaced by an inequality in case $`g`$ has definite convexity. Once we have an approximation $`\overline{f}^A,`$ we immediately recover the corresponding energy trajectory $`F^A`$ from the general minimization formula (1.4). The formulation that reveals the potential shape is obtained when we use $`x`$ instead of $`s`$ as the minimization parameter.
We achieve this by the following general definition of $`x`$ and of the $`K`$ function associated with $`f:`$ $$f(x)=\overline{f}(s),K^{[f]}(x)=\overline{f}^{-1}(f(x)).$$ (2.4) The monotonicity of $`f(x)`$ and of $`\overline{f}`$ guarantees that $`x`$ and $`K`$ are well defined. Since $`\overline{f}^{-1}(f)`$ is a convex function of $`f,`$ the second equality in (1.4) immediately follows . In terms of $`K`$ the envelope approximation (2.3) becomes simply $$K^{[f]}\approx K^{\left[f^{[0]}\right]}.$$ (2.5) Thus the envelope approximation involves the use of an approximate $`K`$ function that no longer depends on $`f,`$ and there is now the possibility that we can invert (1.4) to extract an approximation for the potential shape. We end this summary by listing some specific results that we shall need. First of all, the kinetic potentials and $`K`$ functions obey the following elementary shift and scaling laws: $$f(x)\to a+bf(x/t)\;\Rightarrow \;\left\{\overline{f}(s)\to a+b\overline{f}(st^2),\;K^{[f]}(x)\to \frac{1}{t^2}K^{[f]}\left(\frac{x}{t}\right)\right\}.$$ (2.6) Pure power potentials are important examples which have the following formulas: $$f(x)=|x|^q\;\Rightarrow \;\left\{\overline{f}(s)=\left(\frac{P}{s^{\frac{1}{2}}}\right)^q,\;K(x)=\left(\frac{P}{x}\right)^2\right\},$$ (2.7) where, if the bottom of the spectrum of $`-\mathrm{\Delta }+|x|^q`$ is $`E(q),`$ then the $`P`$ numbers are given by the following expressions with $`n=0:`$ $$P_n(q)=\left|E_n(q)\right|^{\frac{(2+q)}{2q}}\left[\frac{2}{2+q}\right]^{\frac{1}{q}}\left[\frac{|q|}{2+q}\right]^{\frac{1}{2}},q\ne 0.$$ (2.8) We have allowed for $`q<0`$ and for higher eigenvalues since the formulas are essentially the same. The $`P_n(q)`$ as functions of $`q`$ are interesting in themselves: they have been proved to be monotone increasing, they are probably concave, and $`P_n(0)`$ corresponds exactly to the $`\mathrm{log}`$ potential. By contrast the $`E_n(q)`$ are not so smooth: for example, they have infinite slopes at $`q=0.`$ But this is another story. An important observation is that the $`K`$ functions for the pure powers are all of the form $`(P(q)/x)^2`$ and they are invariant with respect to both potential shifts and multipliers: thus $`a+b|x|^q`$ has the same $`K`$ function as does $`|x|^q.`$ For the harmonic oscillator $`P_n(2)=n+\frac{1}{2},`$ $`n=0,1,2,\mathrm{}.`$ Other specific examples may be found in the references cited. The last formulas we shall need are those for the ground state of the sech-squared potential: $$f(x)=-\mathrm{sech}^2(x)\;\Rightarrow \;\left\{\overline{f}(s)=-\frac{2s}{(s+s^2)^{\frac{1}{2}}+s},\;K(x)=\mathrm{sinh}^{-2}(2x)\right\}.$$ (2.9)

3. The inversion sequence

The inversion sequence (1.2) is based on the following idea. The goal is to find a transformation $`g`$ so that $`f=g\circ f^{[0]}.`$ We choose a seed $`f^{[0]},`$ but, of course, $`f`$ is unknown.
In so far as the envelope approximation with $`f^{[0]}`$ as a basis is ‘good’, an approximation $`g^{[1]}`$ for $`g`$ would be given by $`\overline{f}=g^{[1]}\circ \overline{f}^{[0]}.`$ Thus we have $$g\approx g^{[1]}=\overline{f}\circ \overline{f}^{[0]^{-1}}.$$ (3.1) Applying this approximate transformation to the seed we find: $$f\approx f^{[1]}=g^{[1]}\circ f^{[0]}=\overline{f}\circ \overline{f}^{[0]^{-1}}\circ f^{[0]}=\overline{f}\circ K^{[0]}.$$ (3.2) We now use $`f^{[1]}`$ as the basis for another envelope approximation, and, by repetition, we have the ansatz (1.2), that is to say $$f^{[n+1]}=\overline{f}\circ \overline{f}^{[n]^{-1}}\circ f^{[n]}=\overline{f}\circ K^{[n]}.$$ (3.3) A useful practical device is to invert the second expression for $`F`$ given in (1.4) to obtain $$K^{[f]}(x)=\underset{v>0}{\mathrm{max}}\left\{F(v)-vf(x)\right\}.$$ (3.4) The concavity of $`F(v)`$ explains the $`\mathrm{max}`$ in this inversion, which, as it stands, is exact. In a situation where $`f`$ is unknown, we have $`f`$ on both sides and nothing can be done with this formal result. However, in the inversion sequence which we are considering, (3.4) is extremely useful. If we re-write (3.4) for stage \[n\] of the inversion sequence it becomes: $$K^{[n]}(x)=\underset{v>0}{\mathrm{max}}\left\{F^{[n]}(v)-vf^{[n]}(x)\right\}.$$ (3.5) In this application, the current potential shape $`f^{[n]}`$ and consequently $`F^{[n]}(v)`$ can be found (by shooting methods) for each value of $`v.`$ The maximization can then be performed even without differentiation (for example, by using a Fibonacci search) and this is a much more effective method of finding $`K^{[n]}=\overline{f}^{[n]^{-1}}\circ f^{[n]}`$ than finding $`\overline{f}^{[n]}(s),`$ finding the functional inverse, and applying the result to $`f^{[n]}.`$

4. Inversion for pure powers

We now treat the case of pure-power potentials given by $$f(x)=A+B|x|^q,q>0,$$ (4.1) where $`A`$ and $`B>0`$ are arbitrary and fixed. We shall prove that, starting from another pure power as a seed, the inversion sequence converges in just two steps. The exact energy trajectory $`F(v)`$ for the potential (4.1) is assumed known. Hence, so is the exact kinetic potential given by (2.7) and the general scaling rule (2.6), that is to say $$\overline{f}(s)=A+B\left(\frac{P(q)}{s^{\frac{1}{2}}}\right)^q.$$ (4.2) We now suppose that a pure power is also used as a seed; thus we have $$f^{[0]}(x)=a+b|x|^p\;\Rightarrow \;K^{[0]}(x)=\left(\frac{P(p)}{x}\right)^2,$$ (4.3) where the parameters $`a,b>0,p>0`$ are arbitrary and fixed. The first step of the inversion (1.2) therefore yields $$f^{[1]}(x)=\left(\overline{f}\circ K^{[0]}\right)(x)=A+B\left(\frac{P(q)|x|}{P(p)}\right)^q.$$ (4.4) The approximate potential $`f^{[1]}(x)`$ now has the correct $`x`$ power dependence but the wrong multiplying factor. Because of the invariance of the $`K`$ functions to multipliers, this error is completely corrected at the next step, yielding: $$K^{[1]}(x)=\left(\frac{P(q)}{x}\right)^2\;\Rightarrow \;f^{[2]}(x)=\left(\overline{f}\circ K^{[1]}\right)(x)=A+B|x|^q.$$ (4.5) This establishes our claim that power potentials are inverted without error in exactly two steps. The implications of this result are a little wider than one might first suspect. If the potential that is being reconstructed has the asymptotic form of a pure power for small or large $`x,`$ say, then we know that the inversion sequence will very quickly produce an accurate approximation for that part of the potential shape.
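The max formula (3.4) is easy to exercise numerically. The sketch below (ours, not the author's C++ program) recovers $`K(x)`$ for the pure-power case $`f(x)=x^2`$, for which the ground-state trajectory is $`F(v)=\sqrt{v}`$, and checks the result against the closed form $`K(x)=1/(4x^2)`$ implied by Eq. (2.7) with $`P_0(2)=1/2`$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def K_from_F(F, f_x, v_max=1.0e6):
    """Evaluate K(x) = max_{v>0} { F(v) - v f(x) } of Eq. (3.4)."""
    res = minimize_scalar(lambda v: -(F(v) - v * f_x),
                          bounds=(1.0e-12, v_max), method="bounded")
    return -res.fun

F = np.sqrt                  # ground-state trajectory F(v) for f(x) = x^2
for x in (0.5, 1.0, 2.0):
    assert abs(K_from_F(F, x**2) - 1.0 / (4 * x**2)) < 1.0e-6
```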
More generally, since the first step of the inversion process involves the construction of $`K^{[0]},`$ the general invariance property $`K^{[a+bf]}=K^{[f]}`$ given in (2.6) means that the seed potential $`f^{[0]}`$ may be chosen without special consideration of gross features of $`f`$ already arrived at by other methods. For example, the area (if the potential has area), or the starting value $`f(0),`$ need not be incorporated in $`f^{[0]},`$ say, by adjusting $`a`$ and $`b.`$

5. More general examples

We consider the problem of reconstructing the sech-squared potential $`f(x)=-\mathrm{sech}^2(x).`$ We assume that the corresponding exact energy trajectory $`F(v)`$ and, consequently, the kinetic potential $`\overline{f}(s)`$ are known. Thus: $$f(x)=-\mathrm{sech}^2(x)\;\Rightarrow \;\left\{F(v)=-\left((v+\frac{1}{4})^{\frac{1}{2}}-\frac{1}{2}\right)^2,\;\overline{f}(s)=-\frac{2s}{(s+s^2)^{\frac{1}{2}}+s}\right\}.$$ (5.1) We study two seeds. The first seed is essentially $`x^2`$, but we use a scaled version of this for the purpose of illustration in Fig.(1). Thus we have $$f^{[0]}(x)=-1+\frac{x^2}{20}\;\Rightarrow \;K^{[0]}(x)=\frac{1}{4x^2}.$$ (5.2) This potential generates the exact eigenvalue $$F^{[0]}(v)=-v+\left(\frac{v}{20}\right)^{\frac{1}{2}},$$ (5.3) which, like the potential itself, is very different from that of the goal. After the first iteration we obtain $$f^{[1]}(x)=\overline{f}\left(K^{[0]}(x)\right)=-\frac{2}{1+(1+4x^2)^{\frac{1}{2}}}.$$ (5.4) A graph of this potential is shown as $`f1`$ in Fig.(1). In order to continue analytically we would need to solve the problem with Hamiltonian $`H^{[1]}=-\mathrm{\Delta }+vf^{[1]}(x)`$ exactly to find an expression for $`F^{[1]}(v).`$ We know of no way of doing this. However, it can be done numerically, with the aid of the inversion formula (3.5) for $`K.`$ The first 5 iterations shown in Fig.(1) suggest convergence of the sequence. As a second example we consider the initial potential given by $$f^{[0]}(x)=-\frac{1}{1+|x|/5}.$$ (5.5) In this case none of the steps can be carried out exactly. In Fig.(2) we show the first five iterations. Again, convergence is indicated by this sequence, with considerable progress being made in the first step. The numerical technique needed to solve these problems is not the main point of the present work. However, a few remarks about this are perhaps appropriate. As we showed in Ref., by looking at three large values of $`v`$ we can determine the best model of the form $`f(x)=A+B|x|^q`$ that fits the given $`F(v)`$ for small $`x<x_a.`$ We have used this method here with $`v=10000\times \{1,\frac{1}{2},\frac{1}{4}\}`$ and $`x_a=0.2`$ in all cases. As indicated above, the inversion (3.5) was used to determine each $`K^{[n]}`$ function from the corresponding $`F^{[n]}.`$ For all the graphs shown in the two figures, the range of $`v`$ still to be considered for $`x>x_a`$ turned out to be $`0.0008<v<175.`$ With 40 points beyond $`x_a`$ on each polygonal approximation for $`f^{[n]}(x),`$ a complete set of graphs for one figure took about $`4\frac{1}{2}`$ minutes to compute with a program written in C++ on a PentiumPro running at 200 MHz. The exact expression for $`f^{[1]}`$ arising from the harmonic-oscillator starting point was very useful for verifying the behaviour of the program.

6. Conclusion

Progress has certainly been made with geometric spectral inversion.
The results reported in this paper strongly suggest that, in some suitable topology, the inversion sequence (1.2) converges to $`f.`$ The ‘natural’ extension of (1.2) to excited states leads to the conjecture that each of the following sequences converges to $`f:`$ $$f_k^{[n+1]}=\overline{f}_k\circ \overline{f}_k^{[n]^{-1}}\circ f_k^{[n]}=\overline{f}_k\circ K_k^{[n]},k=0,1,2,\mathrm{}$$ (6.1) For the examples studied by the inversion of the WKB approximation, inversion improved rapidly as $`k`$ increased and the view of the problem became more ‘classical’. If an energy trajectory $`F(v)`$ is derived from a potential which vanishes at large distances, is bounded, and has area, it is straightforward to ‘normalize’ the function $`F(v)`$ by scaling so that it corresponds to a potential with area $`-2`$ and lowest point $`-1.`$ The graphs of $`F_0(v)`$ for normalized potentials with square-well, exponential, and sech-squared shapes look very similar: for small $`v`$ they are asymptotically like $`-v^2,`$ and for large $`v`$ they satisfy $`\underset{v\to \mathrm{}}{lim}\{F_0(v)/v\}=-1.`$ These asymptotic features are the same for all such normalized potentials. We now know that encoded in the details of these $`F_0(v)`$ curves for intermediate values of $`v`$ are complete recipes for the corresponding potentials. If, as the WKB studies strongly suggest, the code could be unravelled for the excited states too, the situation would become very interesting. What this would mean is that, given any one energy trajectory $`F_k(v),`$ we could reconstruct from it the underlying potential shape $`f(x)`$ and then, by solving the problem in the forward direction, go on to find all the other trajectories $`\{F_j(v)\}_{j\ne k},`$ and all the scattering data. For large $`k`$ this would imply that a classical view alone could determine the potential and hence all the quantum phenomena which it might generate.

Acknowledgment

Partial financial support of this work under Grant No. GP3438 from the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged.

References

1. R. L. Hall, J. Phys. A: Math. Gen. 28, 1771 (1995).
2. R. L. Hall, Phys. Rev. A 50, 2876 (1995).
3. K. Chadan, C. R. Acad. Sci. Paris Sèr. II 299, 271 (1984).
4. K. Chadan and H. Grosse, C. R. Acad. Sci. Paris Sèr. II 299, 1305 (1984).
5. K. Chadan and R. Kobayashi, C. R. Acad. Sci. Paris Sèr. II 303, 329 (1986).
6. K. Chadan and M. Musette, C. R. Acad. Sci. Paris Sèr. II 305, 1409 (1987).
7. K. Chadan and P. C. Sabatier, Inverse Problems in Quantum Scattering Theory (Springer, New York, 1989). The ‘inverse problem in the coupling constant’ is discussed on p. 406.
8. B. N. Zakhariev and A. A. Suzko, Direct and Inverse Problems: Potentials in Quantum Scattering Theory (Springer, Berlin, 1990). The ‘inverse problem in the coupling constant’ is mentioned on p. 53.
9. R. L. Hall, Phys. Rev. A 51, 1787 (1995).
10. R. L. Hall, J. Math. Phys. 40, 669 (1999).
11. R. L. Hall, J. Math. Phys. 40, 2254 (1999).
12. I. M. Gelfand and S. V. Fomin, Calculus of Variations (Prentice-Hall, Englewood Cliffs, 1963). Legendre transformations are discussed on p. 72.
13. R. L. Hall, J. Math. Phys. 25, 2708 (1984).
14. R. L. Hall, J. Math. Phys. 34, 2779 (1993).

Figure (1) The energy trajectory $`F`$ for the sech-squared potential $`f(x)=-\mathrm{sech}^2(x)`$ is approximately inverted starting from the seed $`f^{[0]}(x)=-1+x^2/20.`$ The first step can be completed analytically, yielding $`f1=f^{[1]}(x)=-2/\{1+\sqrt{1+4x^2}\}.`$ Four more steps $`\{fk=f^{[k]}\}_{k=2}^5`$ of the inversion sequence approaching $`f`$ are performed numerically.
Figure (2) The energy trajectory $`F`$ for the sech-squared potential $`f(x)=-\mathrm{sech}^2(x)`$ is approximately inverted starting from the seed $`f0=f^{[0]}(x)=-1/(1+|x|/5).`$ The first 5 steps $`\{fk=f^{[k]}\}_{k=1}^5`$ of the inversion sequence approaching $`f`$ are performed numerically.
# The Magellanic Stream and the density of coronal gas in the Galactic halo

## 1. Introduction

In current pictures of hierarchical galaxy formation, the initial collapse and continuing accretion of gas-rich fragments produces and maintains an extended halo of diffuse, hot gas surrounding the galaxy (White & Rees 1978; White & Frenk 1991). This gaseous halo fills the dark matter potential, and is roughly in hydrostatic equilibrium out to distances of order the virial radius. The inner, more dense regions cool through thermal bremsstrahlung and slowly accrete into the central regions of galaxies. In the Milky Way, this scenario predicts gas at a temperature $`T_H\sim 10^6\mathrm{K}`$ at distances $`R\gtrsim 50\mathrm{kpc}`$ from the Galactic center.

It is difficult to identify this gas directly from X-ray observations because of the difficulty in determining distances (Snowden 1998). Consequently, although it is clear that there is a diffuse background in the 0.1–2.0 keV range, it is very difficult to determine how much of the emission is local (within a few hundred parsecs), from an extended halo, or extragalactic in origin (Snowden 1998; Snowden et al. 1998). Although most of the emission at $`1/4`$-keV does arise locally (Snowden et al. 1998), most at $`3/4`$-keV does not. This component presumably contains emission originating in the Galactic halo and in extragalactic sources; however, the relative contributions are poorly constrained. Searches for extensive X-ray halos around local, late-type galaxies have also proved unsuccessful. Benson et al. (1999) examined archival ROSAT images of 3 nearby, massive spirals but detected no emission and established upper limits which are far below the predicted luminosities (White & Frenk 1991; Kauffmann et al. 1993; Cole et al. 1994).

Given the difficulty of observing this gas directly, it is useful to infer its existence and properties indirectly. Some of the best evidence comes from the metal absorption lines associated with galaxies seen in quasar spectra (Bahcall & Spitzer 1999; Steidel 1998) and also seen in high-velocity clouds in the Milky Way (Sembach et al. 1999).

The Magellanic Stream offers another potential probe of hot gas in the Milky Way halo. The Stream is a long HI filament, apparently trailing the Magellanic Clouds (Jones, Klemola & Lin 1994) and mostly confined to discrete clouds, which are very similar in properties to other high-velocity clouds (Wakker & van Woerden 1997; Blitz et al. 1999). Because it will interact with ambient halo gas, its observable characteristics should constrain the properties of the diffuse medium. Early on, Mathewson et al. (1977) proposed that the Magellanic Stream represents the turbulent wake of the Magellanic Clouds as they pass through a diffuse halo medium; however, Bregman (1979) identified a variety of observational and theoretical difficulties with this model and concluded that the tidal stripping model (Murai & Fujimoto 1980; Lin & Lynden-Bell 1982; Gardiner & Noguchi 1996, most recently) provides a better explanation. Moore & Davis (1994) modified the gas dynamic model to include stripping by an extended ionized disk and drag by a diffuse halo: their model matches the Stream kinematics well by incorporating drag from a diffuse gas distribution at $`50\mathrm{kpc}`$ that satisfies all known limits. However, the model remains controversial (e.g., Wakker & van Woerden 1997), so that inferences about halo gas properties are correspondingly uncertain.
In the present paper, we reconcile these competing views of Magellanic Stream formation and, in doing so, establish limits on the density of diffuse gas at the current distance of the Magellanic Clouds. In brief, we show that the motion of individual Stream clouds through ambient, ionized gas is dominated not by drag, but by strong heating from accretion. The accretion of ambient gas heats the cloud through thermalization of the bulk flow and through the ionic and electronic enthalpy of accreted gas. Weak radiative cooling leads to mass loss and cloud evaporation. Requiring cloud survival indicates that only the tidal stripping model for the Magellanic Stream is viable. Furthermore, the survival requirement places strong limits on the density of ionized gas in the halo; the constraint is roughly an order of magnitude lower than previously inferred. Discussion of the cloud-gas interaction and the survival constraint is presented in §2. The implications are examined in §3.

## 2. Limits on halo gas

The Magellanic Stream is concentrated primarily in a bead-like sequence of 6 discrete clouds at high Galactic latitude (Mathewson et al. 1977). The cloud MS IV is located near the 'tip' of the Stream at $`\ell =80^o,b=-70^o`$, roughly $`60^o`$ across the sky from the Magellanic Clouds (Cohen 1982). The mean HI column density is $`N_H=6\times 10^{19}\mathrm{cm}^{-2}`$ (Mathewson et al. 1977), which is intermediate between the denser clouds that lie close to the LMC and the more diffuse clouds at the very tip of the Stream. However, it is also rather centrally condensed, with a peak column density of roughly $`1.3\times 10^{20}\mathrm{cm}^{-2}`$ (Cohen 1982). The cloud has approximate HI mass $`M_c=4500(d/\mathrm{kpc})^2M_{\odot }`$, radius $`R_c=15(d/\mathrm{kpc})\mathrm{pc}`$ and temperature $`T_c=10^4\mathrm{K}`$, as determined from the linewidths (Cohen 1982). Assuming a pure hydrogen cloud, the mass and radius give a mean number density $`n_c=0.27(50\mathrm{kpc}/d)\mathrm{cm}^{-3}`$.

The kinematics and age of MS IV depend on the formation model. In the most recent tidal model (Gardiner & Noguchi 1996), the eccentricity of the Magellanic Clouds is relatively small, so that MS IV, having been tidally stripped at perigalacticon roughly $`1.5\mathrm{Gyr}`$ ago and following nearly the same orbit, would have a velocity of $`220\mathrm{km}\mathrm{s}^{-1}`$ at roughly $`50\mathrm{kpc}`$. In the gas drag model, Moore & Davis (1994) argue that the Stream was torn from the Magellanic Clouds during passage at $`65\mathrm{kpc}`$ through an extended, ionized portion of the Galactic disk roughly $`500\mathrm{Myr}`$ ago. From a momentum-conservation argument, they deduce an initial velocity of $`220\mathrm{km}\mathrm{s}^{-1}`$ after separation for MS IV. Additional drag from the ambient halo medium modifies the orbit to give the current radial velocity of $`140\mathrm{km}\mathrm{s}^{-1}`$ with respect to the local standard of rest at a distance $`d\sim 20\mathrm{kpc}`$. The transverse velocity is unspecified but must be large ($`\sim 340\mathrm{km}\mathrm{s}^{-1}`$), because even a total velocity of $`300\mathrm{km}\mathrm{s}^{-1}`$ implies that the orbital energy has diminished by $`\sim 10^{54}\mathrm{erg}`$ since separation. The dissipated energy heats the cloud, which has thermal energy $`E_c=\frac{3}{2}(M_c/m_H)kT_c\sim 10^{51}\mathrm{erg}`$ at $`20\mathrm{kpc}`$, roughly 0.1% of the input energy: the cloud must therefore evaporate.
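As a quick consistency check (ours, not part of the original paper), the quoted mean density and thermal energy follow directly from the cloud parameters; a minimal sketch assuming a uniform, pure-hydrogen sphere:

```python
import numpy as np

# Check MS IV numbers: n_c ~ 0.27 (50 kpc / d) cm^-3 and E_c ~ 1e51 erg,
# using the cloud parameters quoted from Cohen (1982).
M_SUN, PC, K_B, M_H = 1.989e33, 3.086e18, 1.381e-16, 1.673e-24  # cgs

def cloud(d_kpc):
    M = 4500.0 * d_kpc**2 * M_SUN          # HI mass in g
    R = 15.0 * d_kpc * PC                  # radius in cm
    n = (M / M_H) / (4.0 / 3.0 * np.pi * R**3)
    E = 1.5 * (M / M_H) * K_B * 1.0e4      # thermal energy at T_c = 1e4 K
    return n, E

n50, _ = cloud(50.0)
_, E20 = cloud(20.0)
print(f"n_c(50 kpc) = {n50:.2f} cm^-3")    # ~0.26, matching the quoted 0.27
print(f"E_c(20 kpc) = {E20:.1e} erg")      # ~4e51, i.e. of order 1e51
```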
However, as the analysis below shows, if we adopt lower bounds on the velocity and age of MS IV, $`V_c=220\mathrm{km}\mathrm{s}^{-1}`$ and $`t=500\mathrm{Myr}`$, we obtain an upper limit on the gas density at $`50\mathrm{kpc}`$ which is lower than that required to give the drag in the Moore & Davis (1994) model. In addition to the short evaporation timescale, it is difficult to accept the smaller distance to MS IV because, at $`50\mathrm{kpc}`$, the cloud temperature, mass and size put it approximately in virial equilibrium, which naturally explains its centrally condensed appearance. At $`20\mathrm{kpc}`$, the cloud should be unbound and rapidly expanding, unless confined by a strong external pressure.

The parameters of the halo gas at either distance are rather uncertain. Following current galaxy formation models (e.g., White & Frenk 1991), we assume that the gas is in quasi-hydrostatic equilibrium in the Galactic potential. The estimated temperature of the halo is $`T_H=2.9\times 10^6\mathrm{K}`$ for an isothermal halo with rotation speed $`V_0=220\mathrm{km}\mathrm{s}^{-1}`$. This implies that the sound speed is $`c_H=200\mathrm{km}\mathrm{s}^{-1}`$. Halo gas at this distance may rotate with velocities on the order of 10–20 $`\mathrm{km}\mathrm{s}^{-1}`$, leading to a small reduction in its temperature or density. The rotation would have little influence on the Stream-gas interaction, since gas rotation would be aligned with the disk while the Stream travels on a nearly polar orbit.

Previously, the density of halo gas has been estimated by Moore & Davis (1994) using a drag model to account for the kinematics of the Stream clouds. Matching the kinematics of the Stream requires a gas particle density $`n_H\sim 10^{-4}\mathrm{cm}^{-3}`$ at a distance of approximately $`50\mathrm{kpc}`$. As noted above, this approach neglects the energy input into the cloud as it snowplows through the halo medium.<sup>1</sup> As we will show below, heating dominates drag, and cloud survival against evaporation sets a much stronger limit on the halo gas density. This argument is similar to that posed by Cowie & McKee (1976) in constraining the density of ionized gas in the intergalactic medium based on the timescale for conductive evaporation of neutral clouds.

<sup>1</sup> Although clouds have been modeled as blunt objects (e.g., Benjamin & Danly 1997), the boundary conditions are different. The no-slip and no-penetration boundary conditions are not applicable since the cloud is permeable. Magnetic fields do not prevent penetration and shear stress because of charge transfer.
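The quoted halo temperature and sound speed follow from the rotation speed; a short check (our sketch, assuming a pure hydrogen gas with mean molecular weight of unity, which reproduces the paper's numbers):

```python
import numpy as np

# T_H for an isothermal halo with rotation speed V_0, via
# k T_H = m_H V_0^2 / 2, and the adiabatic sound speed
# c_H = sqrt(gamma k T_H / m_H); mu = 1 is our assumption.
K_B, M_H, GAMMA = 1.381e-16, 1.673e-24, 5.0 / 3.0  # cgs

V0 = 220e5                          # 220 km/s in cm/s
T_H = M_H * V0**2 / (2.0 * K_B)
c_H = np.sqrt(GAMMA * K_B * T_H / M_H)
print(f"T_H = {T_H:.2e} K")         # ~2.9e6 K
print(f"c_H = {c_H/1e5:.0f} km/s")  # ~200 km/s
```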
### 2.1. Energy input and mass loss

In its rest frame, the total instantaneous internal energy of the cloud is $`E_c=T+W`$, where $`T`$ is the total thermal energy and $`W`$ is the potential energy. The rate of change in energy is determined by energy input and loss:
$$\frac{dE_c}{dt}=\rho _Hv_HA\left(\frac{1}{2}v_H^2+\frac{5}{2}c_H^2\right)-\mathrm{\Lambda }n_Hn_cV-\dot{M}\left(\frac{1}{2}u^2+\frac{5}{2}\mathrm{\Delta }c^2\right)+\dot{W}.$$ (1)
The first term on the right-hand side gives the energy input through the projected surface area $`A=\pi R_c^2`$ of the cloud's leading edge from halo gas with density $`\rho _H`$, streaming velocity $`v_H`$ and sound speed $`c_H`$; the second term gives the inelastic cooling rate, with reaction rate $`\mathrm{\Lambda }`$ in the volume $`V`$ at the cloud surface where halo gas at density $`n_H`$ mixes with cloud material at mean density $`n_c`$; the third term gives the cooling from cloud mass loss at surface velocity $`u`$ and change in enthalpy $`\frac{5}{2}\mathrm{\Delta }c^2`$, where $`\mathrm{\Delta }c^2`$ is the change in the square of the cloud sound speed; the last term gives the rate of change of the potential energy.<sup>2</sup> In a steady state, $`\dot{E}_c\approx 0`$ (e.g., Cowie & McKee 1977).

<sup>2</sup> Since $`W=\int \rho \mathrm{\Phi }d^3r`$, $`\dot{W}=\int [\dot{\rho }\mathrm{\Phi }+\rho \dot{\mathrm{\Phi }}]d^3r`$. Generally, when the cloud loses mass, $`\dot{\rho }<0`$ and $`\dot{\mathrm{\Phi }}>0`$ while $`\rho >0`$ and $`\mathrm{\Phi }<0`$, so that $`\dot{W}>0`$.

As we discuss below, the protons carry roughly 2/3 of the incident energy: all of the energy of bulk flow and half of the enthalpy, which is of the same order. However, at these energies ($`\sim 100\mathrm{eV}`$), proton collisions with cloud HI are dominated by charge transfer: inelastic losses are negligible.<sup>3</sup> Therefore, neglecting $`\dot{W}`$ heating, we obtain the steady-state mass loss rate
$$\dot{M}=\frac{\rho _Hv_HA\left(\frac{1}{2}v_H^2+\frac{5}{2}c_H^2\right)}{\frac{1}{2}u^2+\frac{5}{2}\mathrm{\Delta }c^2}.$$ (2)
The mass loss rate is equal to the accretion rate times the ratio of specific energy input to specific energy outflow. The outflow velocity $`u\sim v_e`$, the surface escape velocity of the cloud in the tidal field. A typical cloud is loosely bound, so that $`v_e\sim v_{therm}`$, the thermal velocity of the cloud. The change in enthalpy is of the same order. For MS IV, this implies that the evaporated mass leaves the cloud with $`u\sim 10\mathrm{km}\mathrm{s}^{-1}`$ and $`T\sim 2\times 10^4\mathrm{K}`$.

<sup>3</sup> Charge transfer at these relative velocities redistributes particle momentum and energy, rather than creating photons (Janev et al. 1987).

Figure 1 shows the mass loss rate for various combinations of relative velocity and ambient gas density. For the minimum relative velocity $`v_H=220\mathrm{km}\mathrm{s}^{-1}`$ and age $`t=500\mathrm{Myr}`$ defined above, $`n_H<10^{-5}\mathrm{cm}^{-3}`$ in order for the cloud to survive.

Fig. 1 Cloud mass loss rates (upper panel) and corresponding lifetimes (lower panel) as a function of relative velocity for different halo densities. The horizontal line in the lower panel shows the $`500\mathrm{Myr}`$ cutoff. Lifetimes shorter than this are unlikely.
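To see how the survival limit arises, here is a rough evaluation of Eq. (2) (our sketch; the outflow parameters $`u\approx 10\mathrm{km}\mathrm{s}^{-1}`$ and an outflow temperature of $`2\times 10^4\mathrm{K}`$ are the order-of-magnitude values quoted above, so the resulting lifetime is indicative only):

```python
import numpy as np

# Evaluate Eq. (2) for MS IV at d = 50 kpc and compare the evaporation
# time M_c / Mdot with the 500 Myr survival requirement.
M_SUN, PC, K_B, M_H, YR = 1.989e33, 3.086e18, 1.381e-16, 1.673e-24, 3.156e7

v_H, c_H, u = 220e5, 200e5, 10e5           # cm/s
n_H = 1e-5                                 # trial halo density, cm^-3
R_c = 15.0 * 50.0 * PC                     # cloud radius at 50 kpc
M_c = 4500.0 * 50.0**2 * M_SUN             # cloud mass at 50 kpc
A = np.pi * R_c**2

dc2 = (5.0 / 3.0) * K_B * 1.0e4 / M_H      # change in c^2 for 1e4 -> 2e4 K
e_in = 0.5 * v_H**2 + 2.5 * c_H**2         # specific energy input, erg/g
e_out = 0.5 * u**2 + 2.5 * dc2             # specific energy of outflow
Mdot = (n_H * M_H) * v_H * A * e_in / e_out

print(f"Mdot ~ {Mdot / (M_SUN / YR):.2f} Msun/yr")
print(f"lifetime ~ {M_c / Mdot / (YR * 1e6):.0f} Myr")  # of order the 500 Myr cutoff
```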
Mass loss is not spherically symmetric as in the evaporation of cold clouds embedded in a hot, diffuse medium (Cowie & McKee 1977; Balbus & McKee 1982; Draine & Giuliani 1984). Here the halo gas strongly heats the cloud at the leading edge, causing outflow along this surface and ablation of material from the poles (Murali 1999); the interaction is analogous to that of comets with the solar wind, also referred to as a mass-loaded flow (Biermann et al. 1967; Wallis & Ong 1975; Galeev et al. 1985). Since the flow of halo gas is roughly supersonic, a bow shock may form; however, the shock will be very weak and approximately at or interior to the leading edge of the cloud because of cooling from mass loading (Wallis & Ong 1975). Nevertheless, the morphology may be detectable through sensitive X-ray or EUV observations.

### 2.2. Accretion and drag

Accretion of mass and momentum has only a small effect on the cloud, even though energy accretion is significant. The mass accretion rate is
$$\dot{M}_{acc}=\rho _Hv_HA,$$ (3)
while the momentum accretion rate is
$$\dot{P}_{acc}=\rho _Hv_H^2A.$$ (4)
Note that momentum transfer from accretion is equivalent to drag with drag coefficient $`C_D=2`$. At $`v_H=220\mathrm{km}\mathrm{s}^{-1}`$ and $`n_H=10^{-4}\mathrm{cm}^{-3}`$, the mass accretion rate is $`\dot{M}=9.0\times 10^{-4}M_{\odot }\mathrm{yr}^{-1}`$. For an accretion time of $`5\times 10^8\mathrm{yr}`$ and neglecting mass loss, the cloud accretes $`5\times 10^5M_{\odot }`$, roughly 5% of its initial mass. Momentum transfer through accretion reduces the velocity by roughly $`10\mathrm{km}\mathrm{s}^{-1}`$. For $`n_H<10^{-5}\mathrm{cm}^{-3}`$, changes in mass and momentum are entirely negligible.

### 2.3. Thermalization and cooling

Collisions between incident protons and electrons in the inflowing halo gas and target HI in the cloud thermalize the flow and lead to some excitation and radiative cooling. Estimates of the relevant rates can be obtained by considering the cross-sections or reaction rates for collisions between the incident and target particles, given their densities and typical relative velocity. Because the halo gas is so diffuse, $`H^+`$–$`e^{-}`$ scattering is unimportant: $`H`$–$`H^+`$ and $`H`$–$`e^{-}`$ collisions dominate.

Momentum transfer through charge exchange between protons and neutral hydrogen atoms thermalizes the halo gas flow at the leading edge of the cloud. Recent plasma calculations by Krstić & Schultz (1998) give momentum transfer cross-sections $`\sigma _{mt}`$ at the appropriate energies, using the standard method of partial-wave expansions to determine scattering amplitudes (e.g., Landau & Lifschitz 1977). For relative velocities $`v_H\sim 200\mathrm{km}\mathrm{s}^{-1}`$, or relative energies $`\sim 100\mathrm{eV}`$, $`\sigma _{mt}\sim 10^{-15}`$–$`10^{-16}\mathrm{cm}^2`$. Thus the mean free path into the cloud is $`\lambda =1/n_c\sigma _{mt}\sim 10^{16}\mathrm{cm}`$ for $`n_c\sim 0.25\mathrm{cm}^{-3}`$.

Cooling does little to balance the energy input into the cloud. Janev et al. (1987) provide a compendium of thermal reaction rates $`\langle \sigma v_{rel}\rangle `$ as a function of relative energy for excitation, ionization and recombination (inelastic processes) in a wide range of atomic, electronic and ionic collisions. Examining these rates shows that radiation arises purely from collisions between electrons in the inflowing halo gas and neutral hydrogen atoms in the cloud. For $`H`$–$`H^+`$ collisions in a hydrogen plasma at $`T=10^4\mathrm{K}\approx 1\mathrm{eV}`$ with $`v_H=200\mathrm{km}\mathrm{s}^{-1}`$, the reaction rate for any inelastic excitation from the ground state is less than $`3\times 10^{-11}\mathrm{cm}^3\mathrm{s}^{-1}`$ (Janev et al. 1987, pp. 115–136), which is entirely negligible. Thus, since the halo gas is so diffuse and energy transfer between electrons and protons is minimal, all the proton energy (roughly 2/3 of the total) in the bulk flow heats the cloud; only electron-neutral collisions can produce radiation.
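The accretion numbers in §2.2 and the mean free path in §2.3 are easy to verify; a short sketch (ours), using the same cloud parameters as before:

```python
import numpy as np

# Check Secs. 2.2-2.3: accretion rate at n_H = 1e-4 cm^-3, accreted mass
# and velocity change over 5e8 yr, and the charge-transfer mean free path.
M_SUN, PC, M_H, YR = 1.989e33, 3.086e18, 1.673e-24, 3.156e7

v_H, n_H = 220e5, 1e-4
R_c = 15.0 * 50.0 * PC
M_c = 4500.0 * 50.0**2 * M_SUN
A = np.pi * R_c**2

Mdot_acc = n_H * M_H * v_H * A               # Eq. (3)
dM = Mdot_acc * 5e8 * YR                     # accreted mass over 5e8 yr
dv = v_H * dM / M_c                          # momentum-conservation estimate
print(f"Mdot_acc = {Mdot_acc/(M_SUN/YR):.1e} Msun/yr")   # close to the quoted 9.0e-4
print(f"accreted fraction = {dM/M_c:.2f}, dv = {dv/1e5:.0f} km/s")  # ~0.04, ~10

lam = 1.0 / (0.25 * 1e-15)                   # 1/(n_c * sigma_mt), cm
print(f"mean free path ~ {lam:.0e} cm")      # ~4e15, of order 1e16
```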
For ground-state excitation and ionization by electrons under these conditions, reaction rates are below $`5\times 10^{-8}\mathrm{cm}^3\mathrm{s}^{-1}`$ (Janev et al. 1987, pp. 18–31). Recombination rates are considerably lower, $`\langle \sigma v_{rel}\rangle <10^{-13}\mathrm{cm}^3\mathrm{s}^{-1}`$ (Janev et al. 1987, pp. 32–33), and cannot be important even after thermalization to the outflow temperature of $`\sim 1\mathrm{eV}`$. Thus, for densities of incident particles $`n_H\sim 10^{-4}\mathrm{cm}^{-3}`$ and target particles $`n_c\sim 0.25\mathrm{cm}^{-3}`$, with mean relative velocity of order the electron thermal velocity $`v_e`$, which is given by the mean energy $`\sim 100\mathrm{eV}`$, the energy lost to inelastic processes is
$$\mathrm{\Lambda }n_Hn_cV=n_HN_c\underset{i}{\sum }\langle \sigma v_{rel}\rangle _i\mathrm{h}\overline{\nu }A<10^{36}\mathrm{erg}\mathrm{s}^{-1},$$ (5)
where $`V=A\lambda `$ and $`N_c=n_c\lambda `$ is the neutral column density in the mixing layer. The sum over $`i`$ includes the dominant processes: excitation from $`n=1`$ to $`n=2(s,p)`$, from $`n=1`$ to $`n=3`$, and ionization from $`n=1`$ (Janev et al. 1987, pp. 18–27) at a temperature of $`100\mathrm{eV}`$. We take $`\mathrm{h}\overline{\nu }=10\mathrm{eV}`$ and ignore electron cooling (which reduces the amount of radiation produced) over the mean free path of the protons in the cloud; therefore the loss rate is an upper limit. After cooling, electrons drop to the thermal velocity of the outflowing gas; the combination of velocity and density is too low to permit any additional inelastic cooling.

While the radiation rate is substantial, it is considerably lower than the total rate of energy input into the cloud, which is of order $`10^{38}\mathrm{erg}\mathrm{s}^{-1}`$. Although Weiner & Williams (1996) propose that H$`\alpha `$ from the leading edge of MS IV can be produced collisionally when $`n_H\sim 10^{-4}\mathrm{cm}^{-3}`$, the rate estimated here is considerably lower than measured. This in turn suggests that escaping UV photons from the Galactic disk produce the H$`\alpha `$ emission through ionization and recombination (Bland-Hawthorn & Maloney 1999) or possibly through fluorescence.

## 3. Evolution of halo gas

In current scenarios of galaxy formation, gas in galactic halos should cool interior to some radius $`r_c`$ which increases with time (e.g., White & Rees 1978; Mo & Miralda-Escudé 1996). For circular velocity $`V_0=220\mathrm{km}\mathrm{s}^{-1}`$, $`r_c\sim 250\mathrm{kpc}`$ at the current time. However, given the lack of evidence for cooling flows, it is expected that gas within $`r_c`$ is heated into a constant-entropy core (Mo & Miralda-Escudé 1996; Pen 1999), so that, for $`r<r_c`$, the halo gas density is
$$\rho _H(r)=\rho _H(r_c)\left[1-\frac{4}{5}\mathrm{ln}(r/r_c)\right]^{3/2},$$ (6)
where $`\rho _H(r_c)=f_gV_0^2/4\pi Gr_c^2`$, and $`f_g`$ is the fraction of the total halo mass density in gas. If $`f_g`$ equals the universal baryon fraction, then $`f_g\approx 0.05`$ for $`\mathrm{\Omega }_m=1`$ and $`f_g\approx 0.15`$ for $`\mathrm{\Omega }_m=0.3`$, where $`\mathrm{\Omega }_m`$ denotes the ratio of total mass density to closure density of the Universe. The constraint on the density derived here suggests that $`f_g\lesssim 5\times 10^{-3}`$. Within the context of this density model, this discrepancy with the universal fraction leads to the possibility that a considerable amount of gas has cooled and formed stars or dark matter (Mo & Miralda-Escudé 1996) or has been expelled by strong heating (e.g., Field & Perrenod 1977; Pen 1999). Ultimately, however, it is not clear that this model properly describes the gas distribution in galactic halos.
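As a numerical illustration (our sketch), Eq. (6) with the survival bound $`f_g\lesssim 5\times 10^{-3}`$ indeed gives a particle density of order the $`10^{-5}\mathrm{cm}^{-3}`$ limit at 50 kpc:

```python
import numpy as np

# Evaluate the constant-entropy profile, Eq. (6), normalized by
# rho_H(r_c) = f_g V_0^2 / (4 pi G r_c^2), for f_g = 5e-3.
G, M_H, KPC = 6.674e-8, 1.673e-24, 3.086e21  # cgs

f_g, V0, r_c = 5e-3, 220e5, 250.0 * KPC
rho_rc = f_g * V0**2 / (4.0 * np.pi * G * r_c**2)

def n_H(r_kpc):
    x = r_kpc * KPC / r_c
    return rho_rc * (1.0 - 0.8 * np.log(x))**1.5 / M_H

print(f"n_H(50 kpc) = {n_H(50.0):.1e} cm^-3")   # ~1e-5, matching the limit
```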
## 4. Summary

We have re-examined the interaction of the Magellanic Stream with ambient gas at large distances in the Galactic halo. Our analysis shows that heating dominates over drag. Therefore, because of their high relative velocities, clouds are prone to evaporation if the ambient gas density is too large. In particular, the requirement that MS IV survive for $`500\mathrm{Myr}`$ at $`220\mathrm{km}\mathrm{s}^{-1}`$ imposes a limit on the density of halo gas at $`50\mathrm{kpc}`$: $`n_H<10^{-5}\mathrm{cm}^{-3}`$. This upper limit is roughly an order of magnitude lower than the density determined from the drag model of Moore & Davis (1994), and does not concur with current models of the gas distribution in galactic halos.

I am grateful to Mineo Kimura, Hiro Tawara, Predrag Krstić, Neal Katz, Gary Ferland and Ira Wasserman for discussion, numerous helpful suggestions and providing data. I am also indebted to the referee for very constructive criticism. This work was supported by NSERC.
# Programming Pulse Driven Quantum Computers

Seth Lloyd
Complex Systems Group T-13 and Center for Nonlinear Studies
Los Alamos National Laboratory
Los Alamos, New Mexico 87545

Abstract: Arrays of weakly coupled quantum systems can be made to compute by subjecting them to a sequence of electromagnetic pulses of well-defined frequency and length. Such pulsed arrays are true quantum computers: bits can be placed in superpositions of 0 and 1, logical operations take place coherently, and dissipation is required only for error correction. Programming such computers is accomplished by selecting the proper sequence of pulses.

Introduction

A recent paper proposed a technologically feasible quantum computer.<sup>1</sup> This paper contains proofs of the results set forth in that paper. The proposed computers are composed of arrays of weakly coupled quantum systems that are subjected to a sequence of electromagnetic pulses of well-defined frequency and length. Selective driving of resonances induces a parallel, cellular-automaton-like logic on the array, a method proposed by Teich et al.<sup>2</sup> for inducing a logic in arrays of quantum dots. In reference 1, it is shown that this method extends to arrays of quantum systems with generic weak couplings, and that the resulting computers, when operated in a quantum-mechanically coherent fashion, are examples of logically reversible computers that dissipate less than $`k_BT`$ per logical operation; dissipation is only required for error correction.<sup>3-6</sup> In fact, the systems are true quantum computers in the sense of Deutsch:<sup>8</sup> bits can be placed in superpositions of 0 and 1, quantum uncertainty can be used to generate random numbers, and states can be created that exhibit purely quantum-mechanical correlations.<sup>7-12</sup> In this paper, it is shown how such systems can be programmed. A simple sequence of pulses suffices to realize a universal parallel computer. The highly parallel operation of the system also allows fast and robust error-correction routines. A more complicated sequence of pulses instructs the machine to perform arbitrary unitary transformations on collections of quantum bits.

How it works

For the purposes of exposition, consider a heteropolymer, $`ABCABCABC\mathrm{}`$, in which each unit possesses an electron that has a long-lived excited state. For each unit, $`A,B`$ or $`C`$, call the ground state $`0`$ and the excited state $`1`$. Since the excited states are long-lived, the transition frequencies $`\omega _A,\omega _B`$ and $`\omega _C`$ between the ground and excited states are well-defined. In the absence of any interaction between the units, it is possible to drive transitions between the ground state of a given unit, $`B`$ say, and the excited state by shining light at the resonant frequency $`\omega _B`$ on the polymer.<sup>13-14</sup> Let the light be in the form of a $`\pi `$ pulse, so that $`\hbar ^{-1}\int \stackrel{}{\mu }_B\widehat{e}\mathcal{E}(t)𝑑t=\pi `$, where $`\stackrel{}{\mu }_B`$ is the induced dipole moment between the ground state and the excited state, $`\widehat{e}`$ is the polarization vector of the light that drives the transition, and $`\mathcal{E}(t)`$ is the magnitude of the pulse envelope at time $`t`$.
If the $`\pi `$ pulse is long compared with $`1/\omega _B`$, so that its frequency is well-defined, and if the polymer is oriented, so that each induced dipole moment along the polymer has the same angle with respect to $`\widehat{e}`$, then its effect is to take each $`B`$ that is in the ground state and put it in the excited state, and to take each $`B`$ in the excited state and put it in the ground state.

Now suppose that there are local interactions between the units of the polymer, given by interaction Hamiltonians $`H_{AB},H_{BC},H_{CA}`$. Almost any local interaction will do. Consider first the case in which these interaction Hamiltonians are diagonal in the original energy eigenstates for each unit (the effect of off-diagonal terms is considered below). The only effect of such interactions is to shift the energy levels of each unit as a function of the energy levels of its neighbors, so that the resonant frequency $`\omega _B`$, for instance, takes on a value $`\omega _{01}^B`$ if the $`A`$ on its left is in its ground state and the $`C`$ on its right is in its first excited state. If the resonant frequencies for all transitions are different for different values of a unit's neighbors, then the transitions can be driven selectively: if a $`\pi `$ pulse with frequency $`\omega _{01}^B`$ is applied to the polymer, then all the $`B`$'s with an $`A=0`$ on the left and a $`C=1`$ on the right will switch from 0 to 1 and from 1 to 0. If all transition frequencies are different, these are the only units that will switch. Each unit that undergoes a transition coherently emits or absorbs a photon of the given frequency: no dissipation takes place in the switching process.

Driving transitions selectively by the use of resonant $`\pi `$ pulses induces a parallel logic on the states of the polymer: a particular resonant pulse updates the states of all units of a given type as a function of each unit's previous state and the states of its neighbours. All units of the given type with the same values for their neighbours are updated in the same way. That is, applying a resonant pulse to the polymer effects the action of a cellular automaton rule on the states of units of the polymer.<sup>2,15</sup> The cellular automaton rule is of a particularly simple type: if the neighbours of the unit to be switched take on a specific pair of values, then a permutation that interchanges two states of that unit is induced. Since an arbitrary permutation of $`N`$ states can be built up of successive interchanges of two states, one can, by the proper sequence of pulses, realize any cellular automaton rule that permutes first the states of one type of unit as a function of its neighbours, then permutes the states of another type of unit, then another, etc. Any reversible cellular automaton rule in which updating takes place by acting first on one type of unit, then another, then another, must be of this form: a permutation $`\pi _{ij}^X`$ that induces the permutation $`\pi _{ij}`$ on the states of all units of type $`X`$ for all different states $`ij`$ of the neighbours of a unit.
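To make the selective-driving rule concrete, here is a small classical simulation (our illustrative sketch, not from the original paper; `pulse` is a hypothetical helper name) of a register of $`ABC`$ units, where applying a 'pulse' $`\omega _{ij}^X`$ flips every interior unit of type $`X`$ whose left neighbour is in state $`i`$ and whose right neighbour is in state $`j`$:

```python
# Classical simulation of selective pi-pulse logic on a heteropolymer
# ABCABC...: pulse(state, 'B', 0, 1) flips every B whose left A is 0 and
# whose right C is 1, mimicking a pi pulse at frequency w_01^B.

TYPES = "ABC"

def pulse(state, unit_type, left, right):
    """Flip interior units of unit_type whose neighbours read (left, right)."""
    new = list(state)
    for k in range(1, len(state) - 1):           # end units have one neighbour
        if TYPES[k % 3] == unit_type and state[k-1] == left and state[k+1] == right:
            new[k] ^= 1                          # coherent 0 <-> 1 flip
    return new

state = [0, 0, 1, 0, 1, 1, 1, 0, 0]              # units A B C A B C A B C
print(pulse(state, 'B', 0, 1))                   # only B's with A=0, C=1 flip
```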
The system is much more computationally powerful than a simple cellular automaton, since one can change the cellular automaton rule from step to step by changing the sequence of pulses applied. Wolfram<sup>15</sup> discussed variable rule cellular automata in the form of coupled 'Master–Slave' automata, in which the state of the Master cellular automaton varies the rule of the Slave automaton. The systems discussed here are more general than Wolfram's example, however, for the simple reason that the 'Master' is the programmer of the computer: any sequence of pulses, any program, is allowed. In fact, as will be shown below, by selecting the sequence of pulses, one can make even the simplest of such systems perform any computation that one desires: pulse driven quantum computers are universal digital computers.

It is worth noting that any such variable rule cellular automaton with $`M`$ types of unit, and $`m_i`$ states for the $`i`$-th unit, is equivalent in terms of its logical operation to any other such system with the same $`M`$ and $`m_i`$. In the following exposition of sequences of pulses needed to program such variable rule cellular automata, we will concentrate on automata that are one-dimensional, have two states per site, and have three different types of units, $`A,B,C`$, as above. We will show that even such extremely simple systems can be made to perform arbitrary computations. One-dimensional systems with more states per site, or more types of units, or both, are then also computationally universal. We will also prove computational universality for a one-dimensional system with only two types of units, $`A,B`$, in which $`B`$ has three states, one of which exhibits a fast decay to the ground state. Whether a two-unit, two-state reversible variable rule automaton is computationally universal is an open question. Although the exposition here will concentrate on one-dimensional systems, we will also note explicitly when the techniques supplied can be generalized to systems of higher dimension.

Loading and unloading information

A simple sequence of pulses allows one to load information onto the polymer. There is one unit on the polymer that can be controlled independently: the unit on the end.<sup>2</sup> Simply by virtue of having only one neighbour, the unit on the end in general has different resonant frequencies from all other units of the same type. Suppose this unit is an $`A`$: the resonant frequencies $`\omega _i^{A:end}`$ for this unit are functions only of the state $`i`$ of the $`B`$ on its right. If these resonant frequencies are different from the resonant frequencies $`\omega _{ij}^A`$ of the $`A`$'s in the interior of the polymer, then one can switch the end unit from $`0`$ to $`1`$ on its own.

Suppose that all units are initially in their ground state. To load a $`1`$ onto the polymer, apply a $`\pi `$ pulse at frequency $`\omega _0^{A:end}`$. This pulse switches the end unit to $`1`$. To move this $`1`$ along the polymer, apply a $`\pi `$ pulse with frequency $`\omega _{10}^B`$. The only $`B`$ that responds to this pulse is the first: it will switch to $`1`$. Now apply a pulse with frequency $`\omega _1^{A:end}`$. This pulse switches the $`A`$ on the end back to $`0`$. (This act of reversibly restoring a bit to zero using a copy of the bit is called 'uncopying,' and is typical of reversible computation schemes.<sup>3-6</sup>)

To load an arbitrary sequence onto the computer, note first that a sequence of $`\pi `$ pulses with frequencies $`\omega _{10}^B`$, $`\omega _{11}^B`$, $`\omega _{01}^A`$, $`\omega _{11}^A`$, $`\omega _{10}^B`$, $`\omega _{11}^B`$ swaps information between adjacent $`A`$'s and $`B`$'s, taking whatever information is registered in each $`A`$ (except the $`A`$ on the end) and exchanging it for the information in its neighbouring $`B`$.
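Viewed logically, these six pulses are the classic triple-XOR swap: the first pair of $`B`$ pulses gives B ^= A (a $`B`$ flips whenever its left $`A`$ is 1, for either value of $`C`$), the $`A`$ pulses give A ^= B, and the final pair gives B ^= A again. A quick check (our sketch, using the same toy pulse model as above):

```python
# Verify that the pulse sequence w_10^B, w_11^B, w_01^A, w_11^A,
# w_10^B, w_11^B swaps each interior A with the B on its right.
# A B has neighbours (A-left, C-right); an interior A has (C-left, B-right).

TYPES = "ABC"

def pulse(state, unit_type, left, right):
    new = list(state)
    for k in range(1, len(state) - 1):
        if TYPES[k % 3] == unit_type and state[k-1] == left and state[k+1] == right:
            new[k] ^= 1
    return new

def swap_AB(state):
    for t, (l, r) in [('B', (1, 0)), ('B', (1, 1)), ('A', (0, 1)),
                      ('A', (1, 1)), ('B', (1, 0)), ('B', (1, 1))]:
        state = pulse(state, t, l, r)
    return state

s = [0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0]    # A B C A B C A B C A B C
print(swap_AB(s))   # interior A's exchanged with their B's; the end A untouched
```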
Adding in the middle of this sequence a pulse with frequency $`\omega _1^{A:end}`$ causes the $`A`$ on the end to swap information with the $`B`$ on its right as well. Similarly, one can exchange information between adjacent $`B`$'s and $`C`$'s, and between adjacent $`C`$'s and $`A`$'s. (No additional pulses to address the end unit are required for these exchanges.)

To load an arbitrary sequence of values, $`a_1b_1c_1\mathrm{}a_nb_nc_n`$, onto the polymer, first load $`b_n`$ onto the $`A`$ on the end. Swap information between $`A`$'s and $`B`$'s, including the $`A`$ on the end. Now load $`a_n`$ onto the $`A`$ on the end. Swapping the information in the $`B`$'s with the information in the $`C`$'s, then the $`A`$'s with the $`B`$'s, then the $`C`$'s with the $`A`$'s, then the $`B`$'s with the $`C`$'s moves the bit $`b_n`$ to the first $`A`$ from the end, and $`a_n`$ to the first $`C`$ from the end. Now load $`b_{n-1}`$ onto the $`A`$ on the end. Swap information between $`A`$'s and $`B`$'s, etc. Continuing this process loads $`a_1b_10a_2b_20\mathrm{}a_nb_n0`$ onto the first $`n`$ $`ABC`$'s.

To load on the $`c`$'s as well, note that the sequence of swaps, $`B`$'s with $`C`$'s, then $`A`$'s with $`B`$'s, has the effect of taking the information on the $`A`$'s and $`B`$'s and moving each bit one unit to the right, while taking the information on the $`C`$'s and moving each bit two units to the left. Continuing with the sequence of swaps, $`C`$'s with $`A`$'s, $`B`$'s with $`C`$'s, then $`A`$'s with $`B`$'s, $`C`$'s with $`A`$'s, moves each $`ab`$ pair one triple $`ABC`$ to the right, while moving each $`c`$ two triples to the left. The same set of swaps in the opposite order undoes the motion, moving each $`ab`$ one triple to the left, and each $`c`$ two triples to the right. It is clear that one can, by the proper sequence of swaps, shift the information in one type of unit by any amount with respect to the information in the other types of units (subject to the constraint that the overall 'center of gravity' of the array remains fixed, in the sense that the sum of the displacements of the information in all types of units remains zero). To load on the $`c`$'s, move the string $`a_1b_1\mathrm{}a_nb_n`$ $`2n`$ units to the right, then add on the $`c`$'s at intervals of 3 units, starting with $`c_n`$, then $`c_{n-1}`$, etc., moving the $`a`$'s and $`b`$'s left by one unit for each shift of the $`c`$'s right by 2 units. When all the $`c`$'s have been loaded, they will be paired with the proper $`a`$'s and $`b`$'s in the first $`n`$ triples.

(It is clear from the discussion of loading information above that for an array with $`M`$ different types of units, it is simpler to load information in chunks whose size does not exceed $`M-1`$ bits, since $`M-1`$ bits can be translated as a block. This practice will prove important when error correction is introduced.)

This technique clearly works when there are two or more different types of units. The different types of units can have different numbers of states, although the maximum amount of information that can be stored and transferred per unit is limited by the type of unit with the smallest number of states.
For arrays of more than one dimension, note that the same type of unit will tend to have distinct resonant frequencies if it is on a corner, edge, or face, or in the interior of the array. In addition, two units of the same type on two different corners (or edges, or faces) will tend to have distinct resonant frequencies if the type and configuration of their neighbours are different. To load an arbitrary block of bits onto a multi-dimensional array, one starts at a corner, loads an arbitrary string onto an edge, moves it inward one unit onto a face, loads in the next string, moves the two strings a unit further onto the face, and continues until the face contains the first 2-d cross-section of the block. This cross-section can be moved into the interior of the block location while the next 2-d section is built up, etc. Symmetries of the array can interfere with this process.

Unloading information

There are several ways to get information off the polymer. All involve a certain amount of redundancy, since detection efficiencies for single photons are not very good. The simplest way is to have many copies of the polymer. The same sequence of pulses will induce the same sequence of bits on each copy. To read a bit, one applies a sequence of pulses that moves it to the end. Then one applies two $`\pi `$ pulses, with frequencies $`\omega _{0,1}^{A:end}`$. If either of these pulses is attenuated, then the bit on the end is a $`1`$; if either is amplified, then the bit is a $`0`$. This method has the disadvantage that all bits must be moved to the end of the polymer to be read. If the light in the $`\pi `$ pulses can be focussed to within a radius of a few wavelengths, information can also be read out in parallel, simply by copying the bit that is to be read out onto all or most units of the same type within a few-wavelength neighbourhood, and then seeing whether $`\pi `$ pulses aimed at that neighbourhood are attenuated or amplified. (The error-correction schemes described below already require some redundancy.) Other schemes that require less redundancy exist. For example, if the end unit has a fast decay mode (as described below in the section on dissipation), the signal for a bit being a 0 or 1 can be a photon of a different frequency than that of the switching pulse. Only a small number of such photons in a distinct frequency channel need be present to be detected with high accuracy.

Computation

Once information is loaded onto the polymer, a wide variety of schemes can be used to process it in a useful fashion. It is not difficult to find sequences of pulses that realize members of the following class of parallel-processing computers. The polymer is divided up into sections of equal length. By choosing the proper sequence of pulses, and by properly formatting the input information, one can simulate the action of any desired reversible logic circuit on the information within each section. (Since every logical action described up until now is reversible, the entire circuit must be reversible: the logical operation induced by a sequence of $`\pi `$ pulses can be reversed simply by applying the same sequence in reverse order.) The logic circuit realized is, of course, the same for each section, although the initial information on which the circuit acts can be different from section to section. A second sequence of pulses allows each section to exchange an arbitrary number of bits with the sections to its left and right. Input and output can be obtained from the sections on the end, as above, or from each individual section using focussed light.
By choosing the proper section size and sequence of pulses, one can then realize a string of identical microprocessors of arbitrary reversible circuitry, each communicating with its neighbours. Such a device is obviously computationally universal, in the sense that one can embed in it the operation of a reversible universal Turing machine. A device with the parallel architecture described here, however, is likely to be considerably more useful than a Turing machine for performing actual computations. The number of pulses required to realize such a machine is proportional to the length of the wires, measured in terms of the number of units over which bits must be transported, and the number of logic gates in one microprocessor.

We justify the above assertions by giving a recipe for constructing a sequence of pulses that realizes the parallel computer described. The sequence of $`\pi `$ pulses with frequencies $`\omega _{10}^C`$, $`\omega _{11}^C`$, $`\omega _{11}^B`$, $`\omega _{10}^C`$, $`\omega _{11}^C`$ induces the operation of a Fredkin gate on each triple $`ABC`$: a Fredkin gate is a binary gate with three inputs, $`X,Y,Z`$, and three outputs, $`X^{},Y^{},Z^{}`$, in which $`X^{}=X`$, and $`Y^{}=Y`$, $`Z^{}=Z`$ if $`X=0`$; $`Y^{}=Z`$, $`Z^{}=Y`$ if $`X=1`$.<sup>4</sup> (Note that this sequence is closely related to, but simpler than, the set of pulses required to exchange information between the $`B`$'s and the $`C`$'s: the only difference is the lack of the pulse with frequency $`\omega _{01}^B`$.) That is, if $`X=0`$, all three inputs go through unchanged; if $`X=1`$, the second and third inputs are exchanged: a Fredkin gate effects an exchange of information between two units conditioned on the value of a third. Fredkin gates suffice to give the logical operations $`AND`$, $`OR`$, $`NOT`$ and $`FANOUT`$, which in turn form a basis for digital computation.
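As a check on this pulse sequence (our sketch): within a triple, $`C`$'s neighbours are its own $`B`$ and the next triple's $`A`$, and the pulse pair $`\omega _{10}^C`$, $`\omega _{11}^C`$ covers both values of that neighbouring $`A`$, so together the pair implements C ^= B; the single pulse $`\omega _{11}^B`$ implements B ^= (A AND C). The five pulses then act as a Fredkin gate with $`A`$ as the control:

```python
from itertools import product

# Verify that w_10^C, w_11^C, w_11^B, w_10^C, w_11^C acts as a Fredkin
# gate on a triple (A, B, C), with A as the control bit.

def fredkin_pulses(a, b, c):
    c ^= b            # w_10^C and w_11^C together: flip C whenever B = 1
    b ^= a & c        # w_11^B: flip B when A = 1 and C = 1
    c ^= b            # w_10^C and w_11^C again
    return a, b, c

for a, b, c in product((0, 1), repeat=3):
    expect = (a, c, b) if a == 1 else (a, b, c)   # swap B,C iff control A = 1
    assert fredkin_pulses(a, b, c) == expect
print("five-pulse sequence = Fredkin gate with control A")
```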
These two operations, exchange of information between adjacent units and conditional exchange of information, suffice to create the sort of parallel computer described. The trick is encoding the information in the proper format. Any reversible logic circuit can be constructed from Fredkin gates that operate first on a given triplet of bits, then on another triplet of bits, etc. But the sequence of pulses described above applies a Fredkin gate to all collections of three bits at once. This extreme parallelism can be overcome as follows.

Each of the processors in the parallel design described above can be described by the same logical circuit. Consider an implementation of this circuit by Fredkin gates: the circuit design consists of wires that move bits to the proper location, and Fredkin gates that then operate on the proper bits three by three. There are many possible ways to deliver a sequence of pulses that causes the computer to realize the operation of the desired logic circuit. Here we consider a few of the simplest.

Suppose that the implementation of the circuit in terms of Fredkin gates requires $`N`$ bits of input, or, in the circuit diagram, $`N`$ wires leading into the circuit. The $`k`$th processor will be realized on the $`2kN`$ to $`2(k+1)N`$ triples $`ABC`$. Let the $`N`$ bits $`x_1,\mathrm{},x_N`$ that are to be input into the $`k`$th processor be loaded onto the first $`N`$ $`A`$ units of this section, and let the $`N+1`$st triple $`ABC`$ contain $`011`$. Let all the remaining units in the section be set to 0. Since the only $`B`$ and $`C`$ units in the entire section that contain a 1 are in the $`N+1`$st triple, and since the information in the $`A`$'s, $`B`$'s and $`C`$'s can be shifted relative to each other at will, the 1's in the $`B`$ and the $`C`$ can be used as pointers to move bits around, and to locate triples of bits on which one can operate with Fredkin gates.

For example, to interchange $`x_i`$ and $`x_j`$, simply shift the information in the $`C`$'s $`N+1-i`$ triples to the left. The only triple that has $`C=1`$ is now the $`i`$th triple. Now act on all triples $`ABC`$ with a Fredkin gate with $`C`$ as the control input that determines whether the other two inputs are to be interchanged. Since the $`i`$th triple is the only one in which $`C=1`$, this is the only triple in which anything happens. In this triple, $`x_i`$ is moved from $`A`$ to $`B`$. Now shift the information in the $`B`$'s and $`C`$'s $`j-i`$ triples to the right. The $`j`$th triple now contains $`ABC=x_jx_i1`$. Act with a Fredkin gate with $`C`$ as the control input, as before. Once again, there is only one triple in which $`C=1`$: the $`j`$th triple, which after the operation of the gate takes the values $`ABC=x_ix_j1`$. Now shift the information in the $`B`$'s and $`C`$'s $`j-i`$ triples to the left and operate with a Fredkin gate again. The $`i`$th triple now contains $`ABC=x_j01`$. Shifting the information in the $`C`$'s $`N+1-i`$ triples to the right results in the initial state, but with $`x_i`$ and $`x_j`$ interchanged.

The modus operandi is clear: the $`N`$ bits in the $`A`$'s are the sheep, and the two 1's in the $`B`$ and the $`C`$ are the shepherds. By the method of the previous paragraph, one can move to adjacent triples groups of three bits on which one desires to act with a Fredkin gate. To operate with a Fredkin gate on the three bits $`x,y,z`$ in the $`i`$th, $`i+1`$st and $`i+2`$nd $`A`$'s, one performs the following sequence of operations. First, shift the information in the $`C`$'s $`N-i`$ units to the left, and the information in the $`B`$'s $`N-i-1`$ units to the left. The three triples now read $`x00`$, $`y01`$, $`z10`$: the $`i+1`$st triple is the only one that has $`C=1`$, and the $`i+2`$nd triple is the only one that has $`B=1`$. Operate on all triples with a Fredkin gate with $`C`$ as the control input: the only triple affected is the $`i+1`$st, which now has $`ABC=0y1`$. Shift the information in the $`C`$'s $`2`$ units to the right: the $`C=1`$ unit is now in the $`i+3`$rd triple, one to the right of the three triples under consideration, which read $`x00`$, $`0y0`$, $`z10`$. Now operate on all triples with a Fredkin gate with $`B`$ as the control input: the only triple affected is the $`i+2`$nd, which now has $`ABC=01z`$. Shift the $`B`$'s to the left by one triple, and the $`C`$'s to the left by two triples. The three triples now read $`xyz`$, $`011`$, $`000`$. Now operate on all triples with a Fredkin gate with $`A`$ as the control. The only triple that can be affected is the $`i`$th: all other triples that have $`A=1`$ have $`B=C=0`$. The new values of $`x,y,z`$ can now be moved back to their original positions simply by undoing the reversible set of operations that brought them together.

By the above operations, one can move bits anywhere in the section, act with Fredkin gates on any three bits that one desires, move bits again, act on three more bits, etc.
This method allows one to translate the design for any logic circuit composed of Fredkin gates into a sequence of pulses that realizes that logic circuit on the bits of information within a section. In each section, the same logical circuit is realized. After the operation of the circuit has been completed, information can be exchanged between sections very simply. First, identify the bits that need to be transferred to the section on the right. By moving the $`C`$ control unit to the triples in which those bits reside, each of those bits can be transferred from the $`A`$ unit to the $`B`$ unit. Now move the information in the $`B`$'s $`2N`$ triples to the right, and transfer the information in the desired bits from the $`B`$'s back to the $`A`$'s. The transfer of information from one section to the next is complete. An analogous procedure allows the transfer of information to the section on the left. The total number of pulses required to realize a particular circuit design is proportional both to the number of gates and to the length of the wires in the design.

The following is an even simpler method for inducing parallel computing. Its drawback is that it takes up more space and requires more pulses to realize than the previous method. Suppose that, as before, one has the circuit design for the processors in the parallel computer, and that each processor takes $`N`$ bits of input. Let $`m=\lceil N/3\rceil `$, the smallest integer not less than $`N/3`$. When loading information onto the polymer, place the first $`m`$ bits of information on the $`A`$'s in the section in which the processor is to operate, at intervals of $`m`$, so that the first $`A`$ in the section contains the first bit, the $`m+1`$th $`A`$ contains the second bit, the $`2m+1`$th $`A`$ contains the third bit, etc., all other $`A`$'s in the section taking the value 0. Now place the next $`m`$ bits on the $`B`$'s at intervals of $`m+1`$, and the remaining bits on the $`C`$'s at intervals of $`m+2`$. It can be seen immediately that in each section there is at most one triple $`ABC`$ in which information is stored in adjacent units, and that this is the only triple $`ABC`$ in which there can be more than a single unit that takes the value $`1`$. If the pulses that effect a Fredkin gate are applied to the polymer, then this triple is the only triple of bits in each section whose values can change, since a Fredkin gate changes the values of its inputs only if more than one of those bits takes the value $`1`$.

For each Fredkin gate in the circuit diagram for the processor, the exchange operations described above can be used to bring together in the same triple the proper inputs to the gate, and the pulses that effect a Fredkin gate can then be applied. Any reversible logic circuit on the $`N`$ input bits can be realized in this fashion. The exchange operations can then be used to transmit information to the neighbouring processors. The total number of pulses required to enact the circuit is proportional to the number of Fredkin gates and to the length of the wires, measured in terms of the number of units over which information must be moved.

Both of the methods for inducing parallel computation described above are easily adapted to providing parallel computation in more than one dimension.
Quantum computation

The resulting computer is not only a universal digital computer, but a true quantum computer. Bits can be placed in superpositions of 0 and 1 by the simple expedient of applying pulses at the proper resonant frequencies, but of length different from that required to fully switch the bit. For example, if in loading information on the polymer, instead of applying a $`\pi `$ pulse, one applies a $`\pi /2`$ pulse of frequency $`\omega _0^{A:end}`$ and length $`T_1`$, the effect is to put the $`A`$ unit on the end in the state $`\frac{1}{\sqrt{2}}\left(|0\rangle +e^{i\varphi _1}|1\rangle \right)`$, where $`\varphi _1=\pi /2+\omega _0^{A:end}T_1`$. Applied at a time $`T_2`$ later, a $`\pi `$ pulse of frequency $`\omega _{10}^B`$ and length $`T_3`$ then puts the first two units in the state $`\frac{1}{\sqrt{2}}\left(|00\rangle +e^{i\varphi _2}|11\rangle \right)`$, where $`\varphi _2=3\pi /2+\omega _0^{A:end}(T_1+T_2)+(\omega _1^{A:end}+\omega _{10}^B)T_3`$.
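A small numerical sketch of this two-pulse preparation (ours; for simplicity we track only the rotation angles and drop the free-evolution phases $`\varphi _1,\varphi _2`$, so the relative phase below differs from the text's):

```python
import numpy as np

# Two-level rotations: a pulse of area theta acts on (|0>, |1>) as
# R(theta) = [[cos(t/2), -i sin(t/2)], [-i sin(t/2), cos(t/2)]].
# A pi/2 pulse on the end unit A makes an equal superposition; a pi
# pulse at w_10^B then flips B only in the A=1 branch, entangling them.

def R(theta):
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -1j * s], [-1j * s, c]])

I2 = np.eye(2)
psi = np.kron(np.array([1.0, 0.0]), np.array([1.0, 0.0]))  # |A=0, B=0>

psi = np.kron(R(np.pi / 2.0), I2) @ psi      # pi/2 pulse on A
# conditional pi pulse on B, resonant only when A = 1:
U_cond = np.kron(np.diag([1.0, 0.0]), I2) + np.kron(np.diag([0.0, 1.0]), R(np.pi))
psi = U_cond @ psi

print(np.round(psi, 3))   # amplitudes on |00>,|01>,|10>,|11>: (1,0,0,-1)/sqrt(2)
```

The result is an equal-weight superposition of $`|00\rangle `$ and $`|11\rangle `$, as in the text.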
In fact, by the proper sequence of pulses, it is possible not only to create any quantum state of $`N`$ bits, but to effect any unitary transformation desired on those $`N`$ bits. The proof is by induction. The inductive assumption is that it is possible to perform any unitary transformation on the space spanned by vectors $`|1\rangle ,\mathrm{},|k\rangle `$, where each $`|i\rangle `$ is a member of the set $`\{|00\mathrm{}0\rangle ,|10\mathrm{}0\rangle ,\mathrm{},|11\mathrm{}1\rangle \}`$. This is clearly true for $`k=2`$, since it is possible, by applying a resonant pulse of the proper intensity and length, to effect any desired unitary transformation between the states $`|000\mathrm{}0\rangle `$ and $`|100\mathrm{}0\rangle `$, and since it is possible by the logical operations described above to arrange a sequence of pulses whose only effect is to produce some desired permutation of the states $`\{|00\mathrm{}0\rangle ,|10\mathrm{}0\rangle ,\mathrm{},|11\mathrm{}1\rangle \}`$.

To derive this result, we use methods developed by Bennett.<sup>5</sup> The desired permutation $`\mathrm{\Pi }`$ can be accomplished by some reversible logical circuit that takes as input the state $`|i\rangle `$ and gives as output the state $`e^{i\varphi }|\mathrm{\Pi }(i),j(i)\rangle `$, where $`j(i)`$ is some 'junk' information that tells how the computation was performed,<sup>5</sup> and $`\varphi `$ is a phase whose value can be manipulated arbitrarily by varying the length and intensity of the $`\pi `$ pulses used to effect the computation. The pulses can always be delivered in such a way that $`\varphi =0`$, the value which we will assume from this point on. The 'junk' can be cleaned up by making a copy of $`\mathrm{\Pi }(i)`$ and undoing the computation, resulting in the state $`|i,\mathrm{\Pi }(i)\rangle `$. A second circuit can perform the inverse transformation, $`\mathrm{\Pi }^{-1}(i)`$, giving the state $`|i,i,j^{}(i)\rangle `$, where $`j^{}(i)`$ is some more 'junk.' One of the two copies of $`i`$ can now be used to reversibly 'uncopy' the other,<sup>5</sup> leaving $`|i,j^{}(i)\rangle `$, and the inverse transformation can be undone, leaving $`|\mathrm{\Pi }(i)\rangle `$.

Now assume that the inductive assumption is true for some $`k=n`$: we show that it must be true for $`k=n+1`$. It suffices to show that one can effect a unitary operation $`U`$ on the space spanned by $`\{|1\rangle ,\mathrm{},|n+1\rangle \}`$ that transforms an arbitrary orthonormal basis $`\{|\psi _1\rangle ,\mathrm{},|\psi _{n+1}\rangle \}`$ for that space so that $`U|\psi _i\rangle =|i\rangle `$. Let $`|\psi _{n+1}\rangle =\sum _{i=1}^{n+1}\alpha _i|i\rangle `$. By the inductive hypothesis, one can effect a unitary transformation that leaves $`|n+1\rangle `$ fixed (over the course of time $`t`$, $`|n+1\rangle `$ acquires a multiplicative phase $`e^{-iE_{n+1}t/\hbar }`$, but by varying the time over which the pulses that effect the transformation are applied, this phase can be made to take on an arbitrary value, which we will take to be 1) and takes $`\sum _{i=1}^n\alpha _i|i\rangle `$ to $`\beta |0\rangle `$: the result is to take $`|\psi _{n+1}\rangle `$ to $`|\psi _{n+1}^{}\rangle =\beta |0\rangle +\alpha _{n+1}|n+1\rangle `$. Now one can effect a unitary transformation that takes $`\beta |0\rangle +\alpha _{n+1}|n+1\rangle `$ to $`|\psi _{n+1}^{\prime \prime }\rangle =|n+1\rangle `$. These transformations take the remaining $`|\psi _j\rangle `$ to an orthonormal basis $`\{|\psi _j^{\prime \prime }\rangle \}`$ for the space spanned by $`|1\rangle ,\mathrm{},|n\rangle `$. By the inductive hypothesis, one can effect a unitary transformation that leaves $`|n+1\rangle `$ fixed (once again, up to a phase that can be taken to be 1) and takes $`|\psi _j^{\prime \prime }\rangle `$ to $`|j\rangle `$. The mapping is complete, and the resulting proof shows not only that arbitrary unitary maps can be constructed, but how to construct them.

The proposed device is not only a universal digital computer, but a universal quantum analog computer in the sense of Deutsch.<sup>8</sup> The computer can be used to create and manipulate states that exhibit purely quantum-mechanical features, such as Einstein–Podolsky–Rosen correlations that violate Bell's inequalities.<sup>16-18</sup> In addition, by giving each bit a quantum 'twist' when loading it on the computer (for example, by applying a $`\pi /2`$ pulse or a $`3\pi /2`$ pulse at random), information could be encoded and stored in such a way that only the person who knows by how much each bit has been rotated could read the information. All others who try to read it will get no information, and will leave a signature of their attempt to read it in the process, by randomizing the states of the bits.<sup>19</sup>

Dissipation and error correction

Errors in switching and storing bits are inevitable. It is clear that without a method for error correction, the computer described here will not function. Error correction is a logically irreversible process, and requires dissipation if errors are not to accumulate.<sup>3</sup> If, in addition to a long-lived excited state, any of the units possesses an excited state that decays quickly to a long-lived state, this fast decay can be exploited to provide error correction. For example, each $`B`$ could have an additional excited state, 2, that decays to the ground state, 0, in an amount of time short compared with the time between pulses. Any $`B`$ in a long-lived state, 1, e.g., can be restored to the ground state conditioned on the state of its neighbours by applying pulses with the resonant frequency $`\omega _{ij}^B(12)`$ of the transition between the states 1 and 2, given that its neighbours $`A`$ and $`C`$ are in the states $`i`$ and $`j`$. The pulse need not have a well-defined length, provided that it is long enough to drive the transition efficiently. If just one type of unit has a fast decay of the sort described, then one can realize not only any reversible cellular automaton rule that updates first one type of unit, then another, but any irreversible cellular automaton rule as well.
The scheme described above that allows the construction of one-dimensional arrays of arbitrary parallel-processing reversible microprocessors then allows one to produce one-dimensional arrays of arbitrary irreversible microprocessors, each one of which can contain arbitrary error-correcting circuitry. Many error-correcting schemes are possible, using checksums and parity bits,<sup>20-21</sup> multiplexing,<sup>22</sup> etc. A particularly simple and robust scheme is given below.

For each logically irreversible operation accomplished, a photon is emitted incoherently to the environment. In contrast to the switching of bits using $`\pi `$ pulses, in which photons are emitted and absorbed coherently, the switching of bits using fast decays is inherently dissipative. The amount of dissipation depends on what is done with the incoherently emitted photons. If the photon is absorbed and its energy thermalized, then considerably more than $`k_BT`$ is dissipated; if the energy of the photon is put to work, dissipation can be brought down to close to $`k_BT`$.

Such a computer can, in principle, function reliably in the face of a small error rate. Error correction for the method of computation proposed here takes the place of gain and signal restoration in conventional circuits. Whether such a computer can actually be made to function reliably in the face of a finite error rate depends crucially on whether the error-correction routine suffices to correct the number of errors generated in the course of the computational cycle, in between error-correction cycles. Suppose that the probability of error per unit per computational cycle is $`ϵ`$. Suppose that all bits come in $`2k+1`$ redundant copies, and that after each cycle, error correction is performed in parallel by having the copies vote amongst each other as to their proper value, after which all copies are restored to that value: there exist quick routines for performing this operation that are insensitive to errors generated during their execution. The error rate per cycle is reduced to $`\eta \sim (2ϵ)^k`$ by this process. For a computation that uses $`b`$ bits over $`c`$ cycles to have a probability no greater than $`f`$ for the failure of a single bit, we must have $`b(1-\eta )^{bc}\ge b-f`$, which implies that $`\eta \lesssim f/cb^2`$. For example, suppose that the error rate per bit per cycle is a quarter of a percent, $`ϵ=.0025`$. To have a computation involving $`10^{12}`$ bits over $`10^{20}`$ steps have a probability of less than $`1\%`$ of getting a bit wrong requires that each bit have $`47`$ redundant copies. Although such computers have much higher error rates and require much more error correction than conventional computers, because of their high bit density and massively parallel operation, error correction can be carried out without too great a sacrifice in space or time.

Robust error correction schemes

As noted, a wide variety of error-correction schemes is possible. Here we present several schemes that are robust: they correct errors quickly and efficiently, even if errors are committed during their execution.

First, we examine the simplest possible form of error correction. If information is stored redundantly in triplicate, so that each $`ABC`$ is supposed to contain the same bit of information, a simple form of error correction is provided by applying a sequence of pulses that restores $`ABC`$ to $`111`$ if at least two of the units are $`1`$, and to $`000`$ if at least two of the units are $`0`$.
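For intuition about how much this triplicate vote buys (our sketch, assuming independent per-unit errors and an error-free vote): restoring each triple to its majority value turns a per-unit error rate $`ϵ`$ into a per-bit rate $`3ϵ^2(1-ϵ)+ϵ^3\approx 3ϵ^2`$, of the order of the $`ϵ^2`$ quoted below:

```python
from itertools import product

# Majority-restore a triple ABC and compute the probability that the
# restored value is wrong, given independent per-unit error rate eps.
def restore(a, b, c):
    m = 1 if a + b + c >= 2 else 0
    return m, m, m

print(restore(1, 0, 1))    # -> (1, 1, 1)

eps = 0.0025
# the vote fails if 2 or 3 of the three copies are flipped
p_fail = 3 * eps**2 * (1 - eps) + eps**3
print(f"per-bit error after one vote: {p_fail:.2e}")   # ~1.9e-5 ~ eps^2

# sanity check against brute-force enumeration of error patterns
acc = 0.0
for flips in product((0, 1), repeat=3):
    p = 1.0
    for f in flips:
        p *= eps if f else (1 - eps)
    if sum(flips) >= 2:     # majority flipped -> restored bit is wrong
        acc += p
assert abs(acc - p_fail) < 1e-12
```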
Suppose that each $`B`$ unit has an excited state, call it 2, that decays to the ground state 0 in an amount of time comparable to the length of the $`\pi `$ pulses used to do switching. To restore $`B`$ to 0 if $`A`$ and $`C`$ are both 0, one simply applies a pulse at the resonant frequency $`\omega _{00}^B(12)`$ of $`B`$’s transition from 1 to 2, given that $`A=0,C=0`$. As noted above, the pulse need not be of any specific length, as long as it is considerably longer than the lifetime of the state 2. This pulse has the following effect: If $`B`$ is initially in the state 0, it remains in that state. If $`B`$ is initially in the state 1, then it is excited to the state 2, and then decays to 0. The net effect is to reset $`B`$ to 0 provided that $`A`$ and $`C`$ are both 0. To restore $`B`$ to 1 if both $`A`$ and $`C`$ are 1, first apply a pulse at the resonant frequency $`\omega _{11}^B(12)`$ of $`B`$’s transition from 1 to 2, given that $`A=1`$, $`C=1`$. This pulse ensures that if $`A`$ and $`C`$ are 1, $`B`$ is set to 0. Then apply a $`\pi `$ pulse of frequency $`\omega _{11}^B(01)`$ to take $`B`$ from 0 to 1. The net effect is to reset $`B`$ to 1 provided both $`A`$ and $`C`$ are 1. If $`A`$ and $`C`$ have different values, the sequence of pulses has no effect on $`B`$. A simple sequence of pulses will interchange the bits in $`B`$ and $`C`$. If one interchanges $`B`$ with $`C`$, and performs the resetting procedure of the previous paragraph, then interchanges $`A`$ with $`B`$ and resets $`B`$ once again, the required error correction is accomplished. What is desired is a method of correcting errors by the method of voting, but with the voting taking place over an arbitrary number of copies. One way to do this would be to design a circuit that performs the voting, and then realize it by the method given above. Such a circuit might take a large number of pulses to realize, however, and would moreover be vulnerable to errors committed in the course of its operation. We have designed some quick and dirty methods that perform error correction massively in parallel. The basic idea is to store all bits with $`n`$-fold redundancy, in blocks, to have each bit in the block vote with its neighbours in threes, as above, then to scramble up the bits in the block and to perform the voting again. If the error rate per computational step is $`ϵ`$, and if no errors are made in the voting, then after a single vote, the rate will be $`ϵ^2`$, after two votes, $`ϵ^4`$, etc.: the effective error rate rapidly drops to zero. If during voting, there is a probability of error per unit of $`\theta `$, then the fraction of units in each block with the ‘wrong’ value eventually converges to $`\theta `$. The process described is insensitive to errors committed during its execution. There are several ways to realize this particular method of error correction. In the methods for inducing computing described above, we have some bits stored in the $`A`$’s, some in the $`B`$’s, and some in the $`C`$’s. Suppose that each bit is stored with $`n`$-fold redundancy, with $`n`$ triples of blank space between each redundantly registered bit. When error correction begins, then, the data is stored in blocks of $`n`$ triples, and in each block, all the $`A`$’s are supposed to be the same, with all the $`B`$’s and $`C`$’s equal to zero, or all the $`B`$’s are supposed to be the same, with all the $`A`$’s and $`C`$’s equal to zero, etc. Between the blocks of data, there are blocks $`n`$ triples long in which all units are 0.
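At the logical level, the reset-and-interchange procedure above implements a majority vote on each triple. A compact sketch (Python, with each pulse sequence abstracted to its logical action on the bits):

```python
def reset_B(t):
    # Conditional pulses: B is forced to the common value of its neighbours
    # A and C; if A and C differ, B is left unchanged.
    A, B, C = t
    return [A, A if A == C else B, C]

def swap_BC(t):
    A, B, C = t
    return [A, C, B]

def swap_AB(t):
    A, B, C = t
    return [B, A, C]

def vote_triple(t):
    """Reset B, then reset the (original) C and A via interchanges."""
    t = reset_B(t)
    t = swap_BC(reset_B(swap_BC(t)))
    t = swap_AB(reset_B(swap_AB(t)))
    return t

for t in ([0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1]):
    print(t, "->", vote_triple(t))  # always three copies of the majority bit
```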
Both of the computational methods described above can be made to work with this formatting of the data. The error correction routine restores first the $`A`$’s to a common value, then the $`B`$’s, then the $`C`$’s. The method is simple: To restore the $`A`$’s, first supply a sequence of pulses that transforms $`ABC=100`$ to $`ABC=111`$, and $`ABC=111`$ to $`ABC=100`$, leaving all other values for $`ABC`$ unchanged. Such a sequence is easy to devise. This transformation has the effect of making each unit in the blocks in which information is stored in $`A`$’s take on the value of the $`A`$ in its triple, while leaving the blocks in which information is stored in $`B`$’s or $`C`$’s unchanged. One can now make the different $`A`$’s in the block vote three by three amongst themselves. First, shift the information in the $`B`$’s in one direction by some number of triples $`\le n`$, and shift the $`C`$’s in the opposite direction by some independently chosen number $`\le n`$; then restore an $`A`$ to one if its neighbours are one, and to zero if its neighbours are zero. By varying the shifts, one can make each $`A`$ in the block vote with any $`A`$ to its left and any $`A`$ to its right. After the voting is done once, one can shift the $`B`$’s and the $`C`$’s back again, and perform the inverse transformation within each triple, mapping $`111`$ to $`100`$, and $`100`$ to $`111`$, leaving other values fixed. The only weakness in this voting scheme occurs at the ends of blocks in which all the $`A`$’s are supposed to be equal to 1: the last $`A`$ on each block doesn’t get to vote. As a result, without some further measure, blocks in which all the $`A`$’s are supposed to be equal to 1 will tend to be eaten away from the ends. This problem is easily remedied by scrambling up the redundant bits in each block. There is a wide variety of ways to induce this scrambling, one of the simplest of which is the following. The redundant bits in blocks where information is stored in the $`A`$’s can be scrambled amongst each other by inducing the following interaction between that block and a block in which the $`C`$’s are supposed to be equal to 1 (for example, the block of $`C=1`$ that is used to shepherd bits around in the first computing scheme above). First, shift the information in the blocks relative to each other so that the $`m`$ triples on the right of the block in which information is stored in the $`C`$’s overlap the first $`m`$ triples on the left of the block in which information is stored in the $`A`$’s, where $`m\le n/2`$. Enact a Fredkin gate with $`C`$ as the control bit. Since the $`C`$’s are by and large equal to 1, the effect is to interchange the information in the $`m`$ $`A`$’s of the overlapping blocks with the 0’s in the $`m`$ $`B`$’s. Shift the information in the $`B`$’s and $`C`$’s $`m`$ triples to the right, and act with a Fredkin gate with control $`C`$ again. Then shift the information in the $`B`$’s and the $`C`$’s $`m`$ triples to the left, and act with a Fredkin gate with control $`C`$ again. The effect of these actions is as follows: the only place in which anything happens is in the first $`2m`$ triples of the blocks where information is stored in the $`A`$’s, as in all other triples where $`C=1`$, $`A=B=0`$. In such blocks, the effect is to interchange the information that was in the first $`m`$ $`A`$’s of the blocks with the information in the second $`m`$ $`A`$’s of the blocks.
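A toy simulation of this vote-and-scramble cycle is given below (Python; the shifted votes are modelled with cyclic shifts and the scrambling with a random permutation, both idealizations of the pulse sequences, and $`\theta `$ is an assumed per-unit error rate during correction). Starting from a block in which 15% of the $`n`$ copies are wrong, the wrong fraction collapses towards $`\theta `$ within a few votes:

```python
import numpy as np

rng = np.random.default_rng(1)

def vote_and_scramble(block, theta):
    rng.shuffle(block)                          # scrambling within the block
    left = np.roll(block, rng.integers(1, len(block) // 2))
    right = np.roll(block, -rng.integers(1, len(block) // 2))
    voted = (block + left + right >= 2).astype(np.uint8)   # vote in threes
    return voted ^ (rng.random(len(block)) < theta)        # voting errors

n, theta = 301, 1e-3
block = (rng.random(n) < 0.15).astype(np.uint8)  # 1 marks a wrong copy
for _ in range(5):
    block = vote_and_scramble(block, theta)
    print("wrong fraction:", float(block.mean()))
```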
Similarly, it is possible to interchange the information in the last $`m`$ $`A`$’s of the blocks with the information in the second to last $`m`$ $`A`$’s of the blocks. By varying $`m`$, one can scramble up the $`A`$’s in any way that one chooses. Similarly, one can use a block of $`A=1`$ to scramble up the bits in blocks in which information is stored in $`B`$’s or $`C`$’s. There are other, more involved scrambling procedures that do not require blocks of $`C=1`$ to scramble up the bits in blocks in which information is stored in the $`B`$’s or $`C`$’s. Combining such a scrambling procedure with the procedure of voting by threes is an effective method for correcting errors quickly, as long as each bit has a sufficiently large number of redundant copies that the probability of more than a third of them taking on the wrong value in the course of the computation is small. If the probability of error within a block of $`A`$’s after the computational cycle and before the error correction cycle was $`ϵ`$, and if the probability of error per unit during the error–correction cycle is $`\theta `$, then the probability of error after one round of voting by threes is $`2ϵ^2+\theta `$. More precisely, if the number of incorrect $`A`$’s in the block was initially $`p`$, the probability that an $`A`$ in the block takes on the correct value after voting once by threes is $`1-p^2/n^2-(2p/n)\theta `$. One can now repeat the process, having each $`A`$ vote with the copies of a different pair of $`A`$’s in the same block. The probability of error is now reduced to $`2(2ϵ^2+\theta )^2+\theta `$. Etc. As long as the number of redundant bits is sufficiently large, this method will rapidly restore all but a fraction $`\theta `$ of the bits in a block to a common value. Here, ‘sufficiently large’ depends on how many times the voting by triples takes place. Because it takes place in parallel fashion, each voting requires only a short sequence of pulses to realize. For only a small number of votes, five, say, it suffices that the number of incorrect $`A`$’s in the block, $`p`$, never gets larger than $`n/3`$ in the course of the computation. The above method, if error–free, leaves unaffected blocks in which only the $`B`$’s or only the $`C`$’s contain data. If the error rate is $`\theta `$ over a single vote and scramble of the $`A`$’s, then the error–correcting routine for the $`A`$’s will introduce an error rate of $`\ell \theta `$ per unit of the $`B`$’s and $`C`$’s, if carried out over $`\ell `$ votes. The same method can now be used to restore first the $`B`$’s, then the $`C`$’s to their proper values. The resulting error correction technique is efficient and robust. It can easily be generalized to higher dimensions, and to more types of units with more states.

Destruction of quantum coherence

Note that when a photon is emitted incoherently, the quantum coherence of the bit from which it was emitted, and of any other bits correlated with that bit, is destroyed. Incoherent processes and the generation of errors intrinsically limit the number of steps over which the computer can function in a purely quantum–mechanical fashion.

Errors

There are many potential sources of error in the operation of these pulsed quantum computers. The primary difficulty in the proposed scheme is the delivery of effective $`\pi `$ pulses. Microwave technology can give complete inversion with error rates of a fraction of a percent in NMR systems.
Optical systems are at present harder to invert, since bands suffer considerable homogeneous and inhomogeneous broadening. As noted above, a fraction of a percent error per bit per pulse can be tolerated; but a few percent is probably too much. Techniques such as pulse shaping<sup>23</sup> or iterative excitation schemes<sup>24</sup> enhance $`\pi `$ pulse effectiveness and selectivity. If optical systems with sufficiently narrow bands can be found, and if the systems can be well–oriented, so that the coupling with the pulses is uniform, then the rapid advance of laser technology promises soon to reach a level at which $`\pi `$ pulses can be delivered at optical frequencies. In addition to the technological problem of supplying accurate $`\pi `$ pulses, the following fundamental physical effects can cause substantial errors:

Effect of off–diagonal terms in interaction Hamiltonians. These terms have a number of effects. The simplest is to induce unwanted switching of individual units, with a probability of error per unit per pulse of $`\left(\delta \omega _{off}/\omega \right)^2`$ whenever a unit or its neighbour is switched. Here $`\hbar \delta \omega _{off}`$ is the characteristic size of the relevant off–diagonal term in the interaction Hamiltonian. Off–diagonal interactions also induce the propagation of excitons along the polymer: this process implies that a localized excited state has an intrinsic finite lifetime equal to the inverse of the bandwidth for the propagation of the exciton associated with that state.<sup>25-26</sup> For the polymer $`ABCABC\mathrm{\dots }`$, the bandwidth associated with the propagation of an excited state of $`A`$ can be calculated either by a decomposition in terms of Bloch states, or by perturbation theory, and is proportional to $`\delta \omega _{off}^{AB}\delta \omega _{off}^{BC}\delta \omega _{off}^{CA}/(\omega _A-\omega _B)(\omega _A-\omega _C)`$, where $`\hbar \delta \omega _{off}^{AB}`$, e.g., is the size of the term in $`H_{AB}`$ that induces propagation of excitation from $`A`$ to $`B`$. For a polymer of the form $`12\mathrm{\dots }M12\mathrm{\dots }M\mathrm{\dots }`$, the characteristic bandwidth goes as $`\delta \omega _{off}^M/\mathrm{\Delta }\omega ^{M-1}`$, where $`\mathrm{\Delta }\omega `$ is the typical size of the difference between the resonant frequencies of different types of units. For the computer to function, the inverse of the exciton propagation bandwidth must be much longer than the characteristic switching time. If the off–diagonal terms are of the same size as the on–diagonal terms, on average, then for the computer to function, the overall interaction between units must be weak, and $`M`$, the number of different kinds of units in the polymer, must be at least three. Small off-diagonal terms and a relatively large number of different types of units are essential for the successful operation of the computer.

Quantum–electrodynamic effects. The probability of spontaneous emission from a single unit is assumed to be small. In the absence of interactions, the spontaneous decay rate for a unit with resonant frequency $`\omega `$ is $`4\omega ^3\mu ^2/3\hbar c^3`$.<sup>13</sup> If the lifetime of an optical excited state is to be as long as milliseconds, the induced dipole moment $`\mu `$ must be suppressed by symmetry considerations.
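For orientation, one can invert the quoted decay rate to ask how small the dipole moment must be. A rough numerical check in cgs units, assuming a representative optical frequency $`\omega =10^{15}`$ sec<sup>-1</sup> and a millisecond lifetime (both illustrative):

```python
import math

# Invert the quoted spontaneous decay rate 4*omega**3*mu**2/(3*hbar*c**3)
# for an assumed lifetime tau; cgs units throughout.
hbar, c = 1.0546e-27, 2.9979e10      # erg s, cm/s
omega, tau = 1e15, 1e-3              # 1/s, s

mu = math.sqrt(3 * hbar * c**3 / (4 * omega**3 * tau))
print(f"mu = {mu:.2e} statC cm = {mu / 1e-18:.2f} Debye")
# ~0.15 Debye: well below a typical allowed optical transition moment,
# hence the need for symmetry suppression.
```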
Interactions between different units of the same type can give rise to quantum–electrodynamic effects such as super–radiance, and the coherent emission of a photon by one unit and coherent reabsorption by another.<sup>27</sup> Fortunately, the states that are being used for computation, in which each unit is in a well–defined excited or ground state, are exactly those that do not give enhanced probabilities for these processes. In the process of switching, however, and when bits are in superpositions of $`|0\rangle `$ and $`|1\rangle `$, super–radiant emission gives an enhancement of the spontaneous emission rate by a factor of $`n`$, where $`n`$ is the number of units of the same type within a wavelength of the light used. Since the switching time is short compared to the lifetime, super–radiant emission is not a problem. (Though super–radiance can shorten the lifetime of quantum superpositions of logical states.)

Nonlocal interactions. Coherent switching will not work if the shift in a unit’s resonant frequency induced by nearby units of the same type is large enough to throw the unit out of resonance. The dipole–dipole couplings of reference 2 fall off as $`1/r^3`$. For such a long range coupling, many different types of units are required, and the result of a resonant pulse is to realize a cellular automaton rule with a neighbourhood of radius larger than one. None of the purely physical effects gives error rates that are insurmountable. If the $`\pi `$ pulses are long compared to the inverse frequency shifts due to interaction, if the unperturbed resonant frequencies differ substantially between the different types of unit, and if the off–diagonal terms in the interaction Hamiltonians are small compared with the resonant frequencies and their differences, then this computing scheme will work in principle. Although putting them together in a working package may prove difficult, precisely timed monochromatic laser pulses, well–oriented polymers, accurately fabricated semi–conductor arrays, and fast, sensitive photodetectors are all available in today’s technology. Continuously tunable Ti:sapphire lasers, or diode–pumped YAG lasers tuned by side–band modulation, can currently supply frequency–stable picosecond pulses at nanosecond intervals with an integrated intensity that varies by a fraction of a percent. Currently available electro–optical shutters could be used to generate the proper pulse sequence at a nanosecond clock rate. Photodetectors equipped with photomultipliers and acousto–optical filters can reliably detect tens of photons (or fewer) within a wavelength band a few nanometers wide. Although arrays of quantum dots created by $`X`$-ray lithography are not yet of sufficiently uniform quality, arrays of quantum dots and lines that have been created using interference techniques might be sufficiently uniform to realize the proposed scheme.

Numbers

The range of speed of operation of such a pulsed quantum computer within acceptable error rates is determined by the frequency of light used to drive transitions, and by the strength and character of the interactions between units.
For square-wave pulses, the intrinsic probability of error per unit per pulse due to indiscriminate transition–driving is $`\left(1/T\delta \omega _{on}\right)^2`$, where $`T`$ is the pulse length and $`\delta \omega _{on}`$ is the resonant frequency shift induced by on-diagonal terms in the interaction Hamiltonian<sup>13</sup> (this error can be reduced significantly by using shaped pulses<sup>23</sup>), while the probability of error per unit per pulse due to off–diagonal terms in the interaction Hamiltonians is $`\left(\delta \omega _{off}/\omega \right)^2`$. The decay of localized excitations due to exciton propagation gives a lifetime proportional to $`\mathrm{\Delta }\omega ^{M-1}/\delta \omega _{off}^M`$, where $`M`$ is the number of different types of units. Suppose that the excited states have transition frequencies corresponding to light in the visible range, say $`\omega =10^{15}`$ sec<sup>-1</sup>. (Many electronic excited states in molecular systems and quantum dots are in the visible or near–visible range. Visible light is a good range in which to operate, because accurate lasers exist for these frequencies, and because the systems can operate at room temperature.) In the absence of off–diagonal terms in the interaction Hamiltonians, the frequency shifts due to interaction do not need to be small compared to $`\omega `$, and to obtain an intrinsic error rate of less than $`10^{-6}`$ per unit per pulse, the pulse length could be as short as $`10^{-12}`$ seconds, and as long as a few thousandths of the intrinsic lifetimes of the excited states (assuming that a few thousand pulses are required for error correction). The clock rate of such a computer could be varied to synchronize its input and output with conventional electronic devices. In the presence of off–diagonal terms of the same magnitude as the on–diagonal terms, $`\delta \omega _{off}\sim \delta \omega _{on}\sim \delta \omega `$, to obtain an intrinsic error rate of $`10^{-6}`$ per unit per pulse one must have $`\delta \omega =10^{12}`$ sec<sup>-1</sup>, and a minimum pulse length of $`10^{-9}`$ seconds. If the computer has three different types of units, the intrinsic exciton lifetime from the local coupling alone is on the order of $`10^{-6}`$ seconds. The actual exciton lifetime will be shorter as a result of coupling to other modes. The more different kinds of units, the more freedom one has to lengthen the clock cycle. If the units in the quantum computer are nuclear spins in an intense magnetic field, with dipole–dipole interactions, then the pulses will have frequencies in the microwave or radiofrequency region, and the computers will have clock rates from microseconds to milliseconds.

Conclusion

Computers composed of arrays of pulsed, weakly–coupled quantum systems are physically feasible, and may be realizable with current technology. The units in the array could be quantum dots, nuclear spins, localized electronic states in a polymer, or any multistate quantum system that can be made to interact locally with its neighbours, and can be compelled to switch between states using resonant pulses of light. The exposition here has concentrated on one-dimensional systems with two or three states, but more dimensions, more types of unit, and more states per unit provide higher densities of information storage and a wider range of possibilities for information processing, as long as the different transitions can still be driven discriminately.
The small size, high clock speeds and massively parallel operation of these pulsed quantum computers, if realized, would make them good devices for simulating large, homogeneous systems such as lattice gases or fluid flows. But such systems are capable of more than digital computation. When operated coherently, the devices described here are true quantum computers, combining digital and quantum analog capacities, and could be used to create and manipulate complicated many–bit quantum states. Many questions remain: What are the best physical realizations of such systems? (The answer may be different according to whether the devices are to be used for fast, parallel computing, or for generating novel quantum states.) How can they best be programmed? How can noise be suppressed and errors corrected? How can their peculiarly quantum features be exploited? What are the properties of higher dimensional arrays? The device proposed here, as with all devices in the next generation of nanoscale information processing, cannot be built and made to function without addressing fundamental questions in the physics of computation.

Acknowledgements: The author would like to thank Günter Mahler, Carlton Caves, Alan Lapedes, Ronald Manieri, and Brosl Hasslacher for helpful discussions.

References

1. S. Lloyd, A potentially realizable quantum computer, submitted to Science.
2. K. Obermayer, W.G. Teich, G. Mahler, Phys. Rev. B 37, 8096–8110 (1988); W.G. Teich, K. Obermayer, G. Mahler, Phys. Rev. B 37, 8111–8121 (1988); W.G. Teich, G. Mahler, Phys. Rev. A 45, 3300 (1992).
3. R. Landauer, IBM J. Res. Develop. 5, 183–191 (1961).
4. K.K. Likharev, Int. J. Theor. Phys. 21, 311–326 (1982).
5. C.H. Bennett, IBM J. Res. Develop. 17, 525–532 (1973); Int. J. Theor. Phys. 21, 905–940 (1982).
6. E. Fredkin, T. Toffoli, Int. J. Theor. Phys. 21, 219–253 (1982).
7. P. Benioff, J. Stat. Phys. 22, 563–591 (1980); Phys. Rev. Lett. 48, 1581–1585 (1982); J. Stat. Phys. 29, 515–546 (1982); Ann. N.Y. Acad. Sci. 480, 475–486 (1986).
8. D. Deutsch, Proc. Roy. Soc. Lond. A 400, 97–117 (1985); Proc. Roy. Soc. Lond. A 425, 73–90 (1989).
9. R.P. Feynman, Optics News 11, 11–20 (1985); Found. Phys. 16, 507–531 (1986); Int. J. Theor. Phys. 21, 467–488 (1982).
10. W.H. Zurek, Phys. Rev. Lett. 53, 391–394 (1984).
11. A. Peres, Phys. Rev. A 32, 3266–3276 (1985).
12. N. Margolus, Ann. N.Y. Acad. Sci. 480, 487–497 (1986); Complexity, Entropy, and the Physics of Information, Santa Fe Institute Studies in the Sciences of Complexity VIII, W.H. Zurek, ed., 273–288 (1991).
13. L. Allen, J.H. Eberly, Optical Resonance and Two–Level Atoms, Wiley, New York, 1975.
14. W.H. Louisell, Quantum Statistical Properties of Radiation, Wiley, New York, 1973.
15. S. Wolfram, Rev. Mod. Phys. 55, 601 (1983).
16. A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. 47, 777 (1935).
17. G. Mahler, J.P. Paz, to be published.
18. D.M. Greenberger, M. Horne, A. Zeilinger, in Bell’s Theorem, Quantum Theory, and Conceptions of the Universe, M. Kafatos, ed., Kluwer, Dordrecht, 1989; N.D. Mermin, Phys. Today 43(6), 9 (1990).
19. C.H. Bennett, G. Brassard, S. Breidbart, S. Wiesner, Advances in Cryptology: Proceedings of Crypto ’82, Plenum, New York, 267–275 (1982); S. Wiesner, Sigact News 15(1), 78–88 (1983).
20. C.E. Shannon, W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, Urbana, 1949.
21. R.W. Hamming, Coding and Information Theory, Prentice–Hall, Englewood Cliffs, 1986.
22. J. von Neumann, Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components, Lectures delivered at the California Institute of Technology, 1952.
23. W.S. Warren, Science 282, 878–884 (1988).
24. R. Tycko, A. Pines, J. Guckenheimer, J. Chem. Phys. 83(6), 2775–2802 (1985).
25. R.S. Knox, Theory of Excitons, Academic Press, New York, 1963.
26. P.W. Anderson, Concepts in Solids, Addison–Wesley, Redwood City, 1963.
27. R.H. Dicke, Phys. Rev. 93, 99 (1954).
# On de Sitter gravitational instability from the spin-torsion primordial fluctuations and COBE data

L.C. Garcia de Andrade<sup>1</sup>

<sup>1</sup>Departamento de Física Teórica, Instituto de Física, UERJ, Rua São Francisco Xavier 524, Rio de Janeiro, CEP 20550-013, Brasil. e-mail: garcia@dft.if.uerj.br

## Abstract

Fluctuations of the de Sitter solution of the Einstein-Cartan field equations are obtained in terms of the primordial matter density fluctuations and the spin-torsion density, with the matter density fluctuations taken from COBE data. The Einstein-de Sitter solution is shown to be unstable even in the absence of torsion. The spin-torsion density fluctuation is simply computed from the Einstein-Cartan equations and from COBE data.

Recently D. Palle has computed the primordial matter density fluctuations from COBE satellite data. More recently I have extended Palle’s work to include dilaton fields. Also more recently I have considered a mixed inflation model with spin-driven inflation and inflaton fields, where the spin-torsion density has been obtained from the COBE data on the temperature fluctuations. However, in these last attempts no consideration was given to the spinning fluid, and torsion was considered as coming just from the density of inflaton fields, which of course could be considered just in the case of massless neutrinos. Earlier, Maroto and Shapiro discussed the de Sitter metric fluctuations and showed that, in higher-order gravity with dilatons and torsion, the stability of de Sitter solutions depends on the parametrization and dimension, but that for a given dimension one can always choose a parametrization in such a way that the solutions are unstable. In this letter we show that, starting from the Einstein-Cartan equations as given in Gasperini for a four-dimensional spacetime with spin-torsion density, the de Sitter solutions are also unstable for large values of time. Of course one should recall that the Maroto-Shapiro solutions do not possess spin, but are based on a string-type higher-order gravity where torsion enters as in the Ramond action. Let us start from the Gasperini form of the Einstein-Cartan equations for the spin-torsion density $$H^2=\frac{8\pi G}{3}(\rho -2\pi G\sigma ^2)$$ (1) and $$\dot{H}+H^2=-\frac{4\pi G}{3}(\rho +3p-8\pi G\sigma ^2)$$ (2) where $`\frac{\dot{a}}{a}=H(t)`$ is the Hubble parameter. Following the linear perturbation method in cosmological models as described in Peebles, we have $$H(t,r)=H(t)[1+\alpha (r,t)]$$ (3) where $`\alpha =\frac{\delta H}{H}`$ is the de Sitter metric density fluctuation, and the Friedmann metric reads $$ds^2=dt^2-a^2(dx^2+dy^2+dz^2)$$ (4) Also the matter density is given by $$\rho (r,t)=\rho (t)[1+\beta (r,t)]$$ (5) and $$\sigma (r,t)=\sigma (t)[1+\gamma (r,t)]$$ (6) where $`\beta =\frac{\delta \rho }{\rho }`$ is the matter density fluctuation, which is approximately $`10^{-5}`$ as given by COBE data, and $`\gamma (r,t)=\frac{\delta \sigma }{\sigma }`$, where $`\sigma `$ is the spin-torsion density. Substituting these equations into the Gasperini-Einstein-Cartan equations above, in the simple case where the pressure $`p`$ vanishes (dust), we obtain $$\dot{\alpha }=\frac{\dot{H}}{H}+\frac{8\pi G}{3H}[\rho \beta -16\pi G\sigma ^2\gamma ]$$ (7) This last equation can be integrated to $$\alpha =1+\mathrm{ln}H+\frac{8\pi G}{3}\left[\int \frac{\rho \beta dt}{H}-16\pi G\int \frac{\sigma ^2\gamma dt}{H}\right]$$ (8) When the variation of the mass density and spin density are small with respect to time, they can be taken as approximately constant and we
are able to perform the integration for the de Sitter metric $`(H=H_0=\mathrm{constant})`$ as $$\alpha =1+\mathrm{ln}H_0+\frac{8\pi G}{3H_0}t[\rho _0\beta -16\pi G\sigma _0^2\gamma ]$$ (9) which shows clearly that the de Sitter solution to the Einstein-Cartan equations is unstable, since $`\alpha `$ grows linearly with time, and that the growth rate can be computed in terms of the matter and spin-torsion densities. Let us now compute the spin-torsion fluctuation from expressions (1) and (2), by a simple method presented by Liang and Sachs for the case of Newtonian cosmology. In the case of a pressureless de Sitter phase of the universe $`(p=0)`$ one may equate equations (1) and (2) to obtain $$\rho =\frac{5\pi G}{3}\sigma ^2$$ (10) By considering now a radius where the spin and matter density are higher, since the model is homogeneous and isotropic, we may write $$\rho ^{\prime }=\frac{5\pi G}{3}\sigma ^{\prime 2}$$ (11) and computing the matter density fluctuation from these equations one obtains $$\frac{\delta \rho }{\rho }=\frac{\sigma ^{\prime 2}-\sigma ^2}{\sigma ^2}=\frac{(\sigma ^{\prime }-\sigma )(\sigma ^{\prime }+\sigma )}{\sigma ^2}$$ (12) For spin densities close to one another a simple expression for the spin-torsion density fluctuation may be obtained as $$\frac{\delta \rho }{\rho }=2\frac{\delta \sigma }{\sigma }$$ (13) Thus from the COBE data it is possible to obtain a numerical estimate for the spin-torsion density fluctuation as $$\frac{\delta \sigma }{\sigma }\approx 10^{-5}$$ (14) Indeed the general result for $`\sigma ^{\prime }\gg \sigma `$ is given by $$\frac{\delta \sigma }{\sigma }=10^{-5}\frac{\sigma }{\sigma ^{\prime }}$$ (15) and since $`\frac{\sigma }{\sigma ^{\prime }}\ll 1`$ one may infer that the spin-torsion density fluctuation is much weaker than the matter density fluctuation. The results presented here may motivate experimentalists to devise experiments to measure spin-torsion densities in the Universe, such as was done for COBE.

Acknowledgements

I would like to thank Professor Ilya Shapiro and Professor Rudnei Ramos for helpful discussions on the subject of this paper. Financial support from CNPq is gratefully acknowledged.
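As a closing numerical footnote to Eqs. (13)–(15), assuming the COBE value $`\delta \rho /\rho \approx 10^{-5}`$ (the $`\sigma /\sigma ^{\prime }`$ ratios below are merely illustrative):

```python
# Spin-torsion density fluctuation implied by Eqs. (13)-(15) for the COBE
# matter density fluctuation drho/rho ~ 1e-5.
drho = 1e-5

print("sigma' close to sigma:", 0.5 * drho)       # Eq. (13)
for ratio in (0.1, 0.01):                         # Eq. (15): sigma/sigma'
    print(f"sigma/sigma' = {ratio}: {drho * ratio:.0e}")
```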
# Pattern formation in inclined layer convection

## Abstract

We report experiments on thermally driven convection in an inclined layer of large aspect ratio in a fluid of Prandtl number $`\sigma \approx 1`$. We observed a number of new nonlinear, mostly spatio-temporally chaotic, states. At small angles of inclination we found longitudinal rolls, subharmonic oscillations, Busse oscillations, undulation chaos, and crawling rolls. At larger angles, in the vicinity of the transition from buoyancy- to shear-driven instability, we observed drifting transverse rolls, localized bursts, and drifting bimodals. For angles past vertical, when heated from above, we found drifting transverse rolls and switching diamond panes.

Rayleigh-Bénard convection (RBC) of a horizontal fluid layer heated from below has long served as a paradigm for pattern forming systems. Variations that alter the symmetries, such as rotation around a vertical axis and vertical vibrations, continue to lead to important insights. Another case, of particular meteorological and oceanographic interest, is RBC of a fluid layer inclined with respect to gravity. This system is not only well suited for the study of buoyancy and shear flow driven instabilities, but may also serve, along with liquid crystal convection, as a paradigm for anisotropic pattern forming systems. As with RBC, the onset of inclined layer convection (ILC) occurs when the temperature difference, $`\mathrm{\Delta }T`$, across the layer is sufficient for convection rolls to form. The main difference from RBC is that, in ILC, the patternless base state is characterized not only by a linear temperature gradient but also by a symmetry-breaking shear flow. As shown in Fig. 1, the component of gravity tangential to the fluid layer, $`𝐠_{\parallel }`$, causes buoyant fluid to flow up along the warm plate and down along the cold plate. For small angles of inclination $`\theta `$, buoyancy dominates over shear flow ($`𝐠_{\perp }`$ large, $`𝐠_{\parallel }`$ small), and the primary instability is to longitudinal rolls (LR) whose axes are aligned with the shear flow direction. With increasing $`\theta `$, buoyancy effects decrease, and for $`\theta >90^{\circ }`$ buoyancy is stabilizing. Above a critical angle $`\theta _\mathrm{c}`$ below $`90^{\circ }`$, the shear flow causes a primary instability to transverse rolls (TR) with roll axes perpendicular to the shear flow. The few prior experiments on ILC showed reasonable agreement with the linear theory. These experiments also demonstrated that LR are unstable to some form of undulations, in qualitative agreement with theory, but the quantitative details of the state were inaccessible due to experimental limitations. Here we report the first experimental results on pattern formation in ILC for large aspect-ratio systems in a range of inclination angles $`0^{\circ }\le \theta \le 120^{\circ }`$, i.e., from horizontal (heated from below) to past vertical (heated from above). We found many unpredicted states when increasing $`\mathrm{\Delta }T`$ above the critical temperature difference. For $`0^{\circ }\le \theta \le 77.5^{\circ }`$ we observed longitudinal rolls, subharmonic oscillations, Busse oscillations, undulation chaos, and crawling rolls. In the neighborhood of the codimension two point for thermal and shear driven instability ($`77.5^{\circ }\le \theta \le 84^{\circ }`$), we observed drifting transverse rolls, localized bursts, and drifting bimodals. For angles past vertical, when heated from above, we found drifting transverse rolls and switching diamond panes.
Most of these novel states were spatio-temporally chaotic and were found very close to onset, where theoretical progress should be possible. Experiment: Our experimental apparatus consisted of a water-cooled pressure chamber containing a convection cell of diameter 10 cm, subdivided into two large aspect ratio rectangular cells. The experimental design was similar to one described previously. The optically flat upper and lower plates of the convection cell consisted of $`1`$ cm thick single crystal sapphire and single crystal silicon, respectively. The sapphire plate was cooled by a water bath, while the silicon plate was heated by an electric film heater. The convection patterns were visualized by the usual shadowgraph technique. The sidewalls were constructed of 9 layers of notebook paper, providing the best possible thermal matching between cell boundaries and the fluid. As measured interferometrically, the plates were parallel to within $`\pm 0.5`$ $`\mu `$m. The convection cell was housed in a pressure chamber, which held both the cooling water and the convecting gas at $`(41.37\pm 0.01)`$ bar, regulated to $`\pm 5\times 10^{-3}`$ bar. The temperatures of the two plates were regulated to $`\pm 0.0003^{\circ }`$C. Throughout the experiment the mean temperature was kept constant at $`(27.00\pm 0.05)^{\circ }`$C. We determined the cell height $`d`$ by measuring the pattern wavenumber at onset for $`\theta <60^{\circ }`$ and comparing it with the theoretical value of $`q_c=3.117/d`$. We found $`d=(710\pm 2)\mu `$m and $`d=(702\pm 2)\mu `$m for two sets of experiments. The two convection cells had sizes $`\mathrm{\Gamma }_1\approx (21\times 42)d^2`$ and $`\mathrm{\Gamma }_2\approx (14\times 48)d^2`$. For all data, the Prandtl number was $`\sigma \equiv \nu /\kappa =1.07`$, with the kinematic viscosity $`\nu `$ and thermal diffusivity $`\kappa `$. The vertical thermal diffusion time was $`\tau _\mathrm{v}\equiv d^2/\kappa =3.0`$ s. Inclinations from $`0^{\circ }`$ (horizontal) to $`120^{\circ }`$ ($`30^{\circ }`$ past vertical) were possible, with an accuracy of $`\pm 0.02^{\circ }`$. Following standard practice, we calculated the Boussinesq number $`𝒫(\theta )`$ for the corresponding horizontal layer to estimate non-Boussinesq effects. At $`\mathrm{\Delta }T_\mathrm{c}(\theta )`$ for $`\theta <70^{\circ }`$ we found $`𝒫(\theta )<1.0`$, putting the flow into the Boussinesq regime. For larger angles $`𝒫`$ increased linearly to 3.0 for the largest temperature differences investigated. We observed the same convective patterns in both convection cells. Onset of convection: In ILC, the forward bifurcation to LR is predicted to occur at the critical Rayleigh number $`R_\mathrm{c}(\theta )=R_\mathrm{c}^t(0^{\circ })/\mathrm{cos}\theta `$, where $`R_\mathrm{c}^t(0^{\circ })=1708=\alpha d^3g\mathrm{\Delta }T_\mathrm{c}/\kappa \nu `$ ($`\alpha `$ is the thermal expansion coefficient and $`\mathrm{\Delta }T_\mathrm{c}`$ is the critical temperature difference). The threshold for the forward bifurcation to shear-driven TR at large inclination angle is more complicated, and can only be determined numerically. We determined $`\mathrm{\Delta }T_\mathrm{c}`$ for convection by quasi-statically increasing $`\mathrm{\Delta }T`$ in steps of 1 mK every $`20`$ minutes past the point where convection was observable and then decreasing the temperature difference similarly. For all angles we observed forward bifurcations. Figure 2 shows the measured $`R_\mathrm{c}(\theta )`$, as well as the theoretically predicted onsets for both the buoyancy-driven (longitudinal) and the shear-driven (transverse) instabilities.
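The onset measurements can be compared directly with the $`1/\mathrm{cos}\theta `$ scaling. A short numerical sketch, assuming the theoretical $`R_\mathrm{c}^t(0^{\circ })=1708`$ and the experimental fit $`R_\mathrm{c}^e(0^{\circ })=1687`$ (the angles below are illustrative):

```python
import numpy as np

theta = np.array([0.0, 20.0, 40.0, 60.0, 75.0])      # inclination, degrees
for Rc0, label in ((1708.0, "theory    "), (1687.0, "experiment")):
    Rc = Rc0 / np.cos(np.radians(theta))
    print(label, np.round(Rc).astype(int))
# The two curves differ by ~1.2% at every angle, comparable to the quoted
# +/-24 uncertainty of the fitted R_c(0).
```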
We found agreement with theory for the onset of LR: the experimentally observed value was $`R_\mathrm{c}(\theta )=R_\mathrm{c}^e(0^{\circ })/\mathrm{cos}\theta `$ with $`R_\mathrm{c}^e(0^{\circ })=1687\pm 24`$. We did not, however, observe the theoretically predicted stationary TR, but instead drifting TR (DTR) at a slightly larger critical Rayleigh number. The drift down the incline may be attributed to the broken symmetry across the layer which is caused by the temperature dependence of the fluid parameters (non-Boussinesq effects). Very interesting is the vicinity of the theoretically predicted codimension two point at $`\theta _\mathrm{c}=77.76^{\circ }`$, where LR and TR have the same onset value. Experimentally, we found a forward bifurcation to DTR above $`\theta _\mathrm{c}=(77.5\pm 0.05)^{\circ }`$, and in the range $`77.5^{\circ }\le \theta \le 84^{\circ }`$ DTR lost stability to drifting bimodals (DB) above $`ϵ\approx 0.001`$. As shown in Fig. 3, DB consist of a superposition of LR and DTR. Here $`ϵ\equiv (\mathrm{\Delta }T-\mathrm{\Delta }T_\mathrm{c}(\theta ))/\mathrm{\Delta }T_\mathrm{c}(\theta )`$ is the reduced control parameter. Theoretically, Fujimura and Kelly predicted a forward bifurcation to transverse rolls, which lose stability to bimodals at $`ϵ\approx 0.001`$ in a narrow angular region. We find good agreement with these predictions, but with the difference that the experimentally observed patterns are drifting. Nonlinear states: Figure 4 shows the measured phase boundaries for the ten observed nonlinear convective states. At low angles ($`\theta <13^{\circ }`$), LR are stable up to $`ϵ\approx 1`$, above which the novel state of subharmonic oscillations (SO) sets in. These oscillations are characterized by a pearl-necklace-like pattern of bright (cold) spots that travel along a standing wave pattern of wavy rolls. As shown in Fig. 5, these oscillations appear in patches whose location changes in time. Typical frequencies of the oscillations were measured to be 1 to 3 cycles per $`\tau _\mathrm{v}`$. A recent theoretical analysis has shown agreement with this value. With further increase in $`ϵ`$, localized patches of traveling oscillations burst intermittently. Within $`𝒪(\tau _\mathrm{v})`$ the amplitude of the rolls’ waviness increases, the pattern tears transverse to the rolls as shown in the upper left corner of Fig. 5, and fades away leaving an almost parallel roll state. For $`\theta \lesssim 10^{\circ }`$ and $`ϵ\approx 4`$, we observed patches of the well-known Busse oscillations (BO) coexisting with patches of the SO. As shown by the dotted line in Fig. 4, our data for the onset of the BO agrees well with the theoretical prediction calculated for $`\sigma =0.7`$. It is surprising, however, that both oscillations (SO and BO) coexist as localized patches in the same cell. At intermediate angles ($`25^{\circ }<\theta <70^{\circ }`$), where the initial instability is to LR, we found with increasing $`ϵ`$ that LR were unstable to undulations. Although the experimentally determined value for the instability, $`ϵ\approx 0.01`$, agrees well with the theoretical prediction (see Fig. 4), we did not observe a stationary pattern of undulations, but a defect-turbulent state of undulation chaos (UC). A snapshot of UC is shown in Fig. 6a. At $`ϵ\approx 0.11`$, the UC begins to “twitch” in the direction transverse to the rolls on time scales of $`𝒪(\tau _\mathrm{v})`$. With increasing $`ϵ`$, the amplitude of the twitching increases and the rolls eventually tear, with the ends “crawling” in the direction transverse to the original rolls. A snapshot of crawling rolls (CR) is shown in Fig. 6b.
In the vicinity of the codimension-two point, at $`\theta _\mathrm{c}`$, we observed drifting bimodals quite close to onset. As shown in Fig. 4, for small angles the existence region of the pure DB is limited by localized transverse bursts (TB), while for large angles by DTR. A snapshot of transverse bursts and the evolution of a single burst is shown in Fig. 7. In this region of phase space the LR occur in patches that grow and decay intermittently, while TB nucleate in high-amplitude LR regions. As shown in the time series in Fig. 7, TB grow over the period of a few $`\tau _\mathrm{v}`$ and then decay rapidly. Above $`ϵ\approx 0.8`$ the DB are unstable to localized longitudinal bursts (LB), as shown in Fig. 8a. As shown in Fig. 8b–j, a few longitudinal rolls grow locally to large amplitude and then quickly fade. With both types of bursts, the burst density and frequency increase as $`ϵ`$ is increased, eventually developing into a turbulent state at $`ϵ\approx 1`$. Past $`90^{\circ }`$, we continued to observe shear-driven convection patterns. DTR are the primary instability; however, they are unstable to switching diamond panes (SDP) at $`ϵ\approx 0.07`$. The state is characterized by spatio-temporally chaotic switching, on time scales of $`𝒪(\tau _\mathrm{v})`$, of large-amplitude regions of DTR from $`+45^{\circ }`$ to $`-45^{\circ }`$, as seen in Fig. 9a. At $`ϵ\approx 0.1`$ SDP are unstable to LB, which in this region of phase space are denser but travel less distance than in the TR regime, as shown in Fig. 9b. Conclusion: Inclined layer convection in the weakly nonlinear regime displays a rich phase diagram, with ten different states accessible over the range of parameters investigated. The phase space naturally divides into several regions of characteristic behavior which have so far been characterized semi-quantitatively. All states but longitudinal and transverse rolls are spatio-temporally chaotic. Most instabilities occurred very close to onset, and further theoretical description should be possible. Especially interesting is the bursting behavior, which may be related to turbulent bursts in other shear flows. We thank F. H. Busse and W. Pesch for important discussions on the stability curves and theoretical descriptions of various states. E.B. acknowledges the kind hospitality of Prof. H. Levine at the University of California at San Diego, where part of this manuscript was prepared. We gratefully acknowledge support from the NSF under grant DMR-9705410.
# A Possible Explanation for the Radio Afterglow of GRB980519: The Dense Medium Effect

## 1 Introduction

In the standard model of gamma-ray bursts (GRBs) (see Piran 1999 for a review), an afterglow is generally believed to be produced by the synchrotron radiation or inverse Compton scattering of the shock-accelerated electrons in an ultra-relativistic shock wave expanding in a homogeneous medium. As more and more ambient matter is swept up, the shock gradually decelerates while the emission from such a shock fades down, dominating at the beginning in X-rays and progressively at the optical to radio energy bands (Mészáros & Rees 1997; Waxman 1997a; Wijers & Galama 1999). In general, the light curves of X-ray and optical afterglows are expected to exhibit power-law decays (i.e. $`F_\nu \propto t^{-\alpha }`$) with the temporal index $`\alpha `$ in the range 1.1–1.4, given an energy spectral index of the electrons $`p\approx 2`$–3. The observations of the earliest afterglows are in good agreement with this simple model (e.g. Wijers, Rees & Mészáros 1997; Waxman 1997b). However, over the past year, we have come to recognize a class of GRBs whose afterglows showed light curve breaks (e.g. GRB 990123, GRB 990510; Kulkarni et al. 1999a; Harrison et al. 1999) or steeper temporal decays (i.e. $`F_\nu \propto t^{-2}`$; e.g. GRB 980519, GRB 980326; Bloom et al. 1999). Explanations for these behaviors include three scenarios: 1) a jet-like relativistic shock has undergone the transition from the spherical-like phase to a sideways-expansion phase (Rhoads 1999), as suggested by some authors (e.g. Sari, Piran & Halpern 1999; Kulkarni et al. 1999a; Harrison et al. 1999); 2) the shock wave propagates in a wind-shaped circumburst environment with the number density $`n\propto r^{-2}`$ (Dai & Lu 1998; Mészáros, Rees & Wijers 1998; Chevalier & Li 1999; Chevalier & Li 2000; Li & Chevalier 1999); 3) a dense medium environment ($`n\approx 10^5`$–$`10^6\mathrm{cm}^{-3}`$) makes the shock wave evolve into the sub-relativistic phase after a short relativistic one (Dai & Lu 1999a,b). In the last model, since an afterglow from the shock at the sub-relativistic stage decays more rapidly than at the relativistic one, we expect a light curve break or a long-term steeper decay, depending on the time when the shock begins to enter the sub-relativistic stage. This scenario has reasonably interpreted the break in the R-band afterglow of GRB 990123 (Dai & Lu 1999a) and the steep decays of the X-ray and optical afterglows of GRB 980519 (Dai & Lu 1999b). Recently, Frail et al. (1999a) tried to test the first two models (the jet and wind cases) by means of the radio afterglow behavior of GRB 980519 and found that the wind model described it rather well. Due to the strong modulation of the light curve, however, they could not draw a decisive conclusion for the jet case. In this paper, we examine the possibility of describing the evolution of the radio afterglow in terms of the dense medium model. Since this scenario involves the transition of the shock wave from the relativistic stage to the sub-relativistic one, we have considered the trans-relativistic shock hydrodynamics in the numerical study. We first present the asymptotic result of the fitting of the radio data in section 2, and then the numerical result in section 3.
In section 4, we show that the optical extinction due to the dense circumburst medium is not important, since the prompt optical-UV radiation, caused by the reverse shock, can destroy the dust by sublimation out to a substantial distance, as proposed by Waxman & Draine (1999). Finally, we give our discussions and conclusions.

## 2 Asymptotic behavior of the radio afterglow in the sub-relativistic stage

GRB 980519 is the second brightest GRB in the BeppoSAX sample. Its optical afterglow measured since $`8.5`$ hours after the burst exhibited rapid fading, consistent with $`t^{-2.05\pm 0.04}`$ in $`BVRI`$ (Halpern et al. 1999; Djorgovski et al. 1998), and the power-law decay slope of the X-ray afterglow, $`\alpha _X=2.07\pm 0.11`$ (Owens et al. 1998), was in agreement with the optical. The spectrum in the optical band alone is well fitted by a power-law $`\nu ^{-1.20\pm 0.25}`$, while the optical to X-ray spectrum can also be fitted by a single power-law $`\nu ^{-1.05\pm 0.10}`$. The radio emission was observed with the Very Large Array (VLA) (Frail et al. 1999a) since about 7.2 hours after the burst and is referred to as VLA J232221.5+771543. The radio light curve shows a gradual rise to a plateau followed by a decline until below detectability after about 60 days. There are some large variations in these data, which are believed to be caused by interstellar scattering and scintillation (ISS; Frail et al. 1999a). As discussed by Dai & Lu (1999b), the steep decays of the X-ray and optical afterglows of GRB 980519 can be attributed to the shock evolution into the sub-relativistic phase 8 hours after the burst as the result of the dense circumburst medium. During such a sub-relativistic expansion phase, the hydrodynamics of the shocked shell is described by the self-similar Sedov-von Neumann-Taylor solution. The shell radius and its velocity scale with time as $`r\propto t_\oplus ^{2/5}`$ and $`\beta \propto t_\oplus ^{-3/5}`$, where $`t_\oplus `$ denotes the time measured in the observer frame. Then, we obtain the synchrotron peak frequency $`\nu _m\propto t_\oplus ^{-3}`$, the cooling frequency $`\nu _c\propto t_\oplus ^{-1/5}`$, the peak flux $`F_{\nu _m}\propto t_\oplus ^{3/5}`$ and the self-absorption frequency $`\nu _a\propto t_\oplus ^{-\frac{3p-2}{p+4}}`$ for the case of $`\nu _a>\nu _m`$ (Dai & Lu 1999b). Now, the derived spectra and light curves are $$F_\nu =\{\begin{array}{cc}\left(\nu _a/\nu _m\right)^{-\left(p-1\right)/2}\left(\nu /\nu _a\right)^{5/2}F_{\nu _m}\propto \nu ^{5/2}t_\oplus ^{11/10},\hfill & \mathrm{if}\nu _m<\nu <\nu _a;\hfill \\ \left(\nu /\nu _m\right)^{-\left(p-1\right)/2}F_{\nu _m}\propto \nu ^{-\left(p-1\right)/2}t_\oplus ^{\left(21-15p\right)/10},\hfill & \mathrm{if}\nu _a<\nu <\nu _c;\hfill \\ \left(\nu _c/\nu _m\right)^{-\left(p-1\right)/2}\left(\nu /\nu _c\right)^{-p/2}F_{\nu _m}\propto \nu ^{-p/2}t_\oplus ^{\left(4-3p\right)/2},\hfill & \mathrm{if}\nu >\nu _c.\hfill \end{array}$$ (1) If the observed optical afterglow was emitted by slow-cooling electrons while the X-ray afterglow came from fast-cooling electrons, and if $`p\approx 2.8`$, then according to Eq.(1) the decay indices are $`\alpha _R=-(21-15p)/10\approx 2.1`$ and $`\alpha _X=-(4-3p)/2\approx 2.2`$, in excellent agreement with observations. Also, the model spectral index at the optical to X-ray band, $`\beta =(p-1)/2\approx 0.9`$, is quite consistent with the observed one, $`1.05\pm 0.10`$.
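The quoted indices follow directly from Eq. (1); a one-line numerical check for $`p=2.8`$:

```python
# Decay and spectral indices implied by Eq. (1) in the sub-relativistic
# (Sedov) phase for p = 2.8, as quoted in the text.
p = 2.8
print("optical decay index :", -(21 - 15 * p) / 10)    # 2.1
print("X-ray decay index   :", -(4 - 3 * p) / 2)       # 2.2
print("spectral index beta :", (p - 1) / 2)            # 0.9
print("nu_a decay exponent :", (3 * p - 2) / (p + 4))  # ~0.94
```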
Furthermore, from the information of the X-ray and optical afterglows, Dai & Lu (1999b) have inferred the physical parameters of this burst as follows: $$\begin{array}{cc}E\approx 0.3\times 10^{52}\mathrm{erg},ϵ_e\approx 0.16,ϵ_B\approx 2.8\times 10^{-4},\hfill & \\ n\approx 3\times 10^5\mathrm{cm}^{-3},z\approx 0.55,\hfill & \end{array}$$ (2) where $`E`$ is the shock energy, $`z`$ is the redshift of the burst, and $`ϵ_e`$ and $`ϵ_B`$ are the electron and magnetic energy fractions of the shocked medium, respectively. After the 60-day radio observations were published, we promptly checked the dense medium model, and found that the asymptotic analysis can approximately describe the radio behavior. The analysis is as follows: adopting the inferred values of the physical parameters in Eq.(2), the detected frequency $`\nu _\oplus =8.46\mathrm{GHz}`$ equals $`\nu _a`$ at about day 12; thus, according to Eq.(1), we expect that before this time the radio emission rises as $`t_\oplus ^{1.1}`$ and then decays as $`t_\oplus ^{-2.1}`$ after the self-absorption frequency $`\nu _a`$ falls below $`\nu _\oplus `$. This simple asymptotic solution agrees qualitatively with observations, as shown in Fig. 1 by the dotted line. This preliminary analysis stimulated us to fit the radio data with a more detailed model by taking into account the trans-relativistic shock hydrodynamics and the strict self-absorption effects of the synchrotron radiation.

## 3 Trans-relativistic shock hydrodynamics, self-absorption effect and the fitting of the radio data

We consider an instantaneous release of a large amount of energy $`E`$ in a constant density external medium. The energy released drives in the medium a shock wave, whose dynamic evolution from the relativistic to the sub-relativistic phase can be described approximately in the following way. Let $`r`$ be the shock radius, $`\gamma `$ and $`\mathrm{\Gamma }`$ be, respectively, the Lorentz factors of the shell and the shock front, and $`\beta `$ be the velocity of the shock front. As usual, the shock expansion is assumed to be adiabatic, during which the energy is conserved, and we have (Blandford & McKee 1976) $$\frac{4}{3}\pi \sigma \beta ^2\mathrm{\Gamma }^2r^3nm_pc^2=E,$$ (3) where $`\sigma `$ is a coefficient: $`\sigma \approx 0.35`$ when $`\beta \rightarrow 1`$ and $`\sigma \approx 0.73`$ when $`\beta \rightarrow 0`$. Following Huang, Dai & Lu (1998), we use an approximate expression for $`\sigma `$: $`\sigma =0.73-0.38\beta `$. The radius of the shock wave evolves as (Huang, Dai & Lu 1998) $$\frac{dr}{dt_\oplus }=\beta c\gamma \left(\gamma +\sqrt{\gamma ^2-1}\right)/\left(1+z\right),$$ (4) and the jump conditions of the shock are given by (Blandford & McKee 1976) $$n^{}=\frac{\widehat{\gamma }\gamma +1}{\widehat{\gamma }-1}n,e^{}=\frac{\widehat{\gamma }\gamma +1}{\widehat{\gamma }-1}\left(\gamma -1\right)nm_pc^2,$$ (5) $$\mathrm{\Gamma }^2=\frac{\left(\gamma +1\right)\left[\widehat{\gamma }\left(\gamma -1\right)+1\right]^2}{\widehat{\gamma }\left(2-\widehat{\gamma }\right)\left(\gamma -1\right)+2},$$ (6) where $`e^{}`$ and $`n^{}`$ are the energy and number densities of the shell in its comoving frame and $`\widehat{\gamma }`$ is the adiabatic index, which equals $`4/3`$ for ultra-relativistic shocks and $`5/3`$ for sub-relativistic shocks. A simple interpolation between these two limits, $`\widehat{\gamma }=\frac{4\gamma +1}{3\gamma }`$, gives a valid approximation for trans-relativistic shocks (Dai, Huang & Lu 1999).
Using the above equations, we can now numerically obtain the evolution of $`r\left(t_\oplus \right)`$ and $`\gamma \left(t_\oplus \right)`$ in the trans-relativistic stage, given proper initial conditions. As usual, we assume that the distribution of relativistic electrons with Lorentz factor $`\gamma _e`$ takes a power-law form with the number density given by $`n\left(\gamma _e\right)d\gamma _e=C\gamma _e^{-p}d\gamma _e`$ above a low-energy limit $`\gamma _{min}`$, which is determined by the shock velocity: $`\gamma _{min}=ϵ_e\frac{\left(p-2\right)}{\left(p-1\right)}\frac{m_p}{m_e}\left(\gamma -1\right)`$. Also, the energy densities of electrons and magnetic fields are assumed to be proportional to the total energy density $`e^{}`$ in the comoving frame, $`U_e^{}=ϵ_ee^{}`$ and $`B^{}=\left(8\pi ϵ_Be^{}\right)^{1/2}`$. Thus, from the standard theory of synchrotron radiation (Rybicki & Lightman 1979; Li & Chevalier 1999), we have the expressions for the effective optical depth and the self-absorbed flux $$\tau _\nu ^{}=\frac{p+2}{8\pi m\nu ^{\prime 2}}\frac{\sqrt{3}q^3}{2mc^2}\left(\frac{4\pi mc\nu ^{}}{3q}\right)^{-p/2}F_2\left(\frac{\nu ^{}}{\nu _m^{}}\right)CB^{\prime \left(p+2\right)/2}\mathrm{\Delta }r^{},$$ (7) $$\nu _m^{}=\frac{3\gamma _{min}^2qB^{}}{4\pi mc},C=\left(p-1\right)n^{}\gamma _{min}^{p-1},\mathrm{\Delta }r^{}=r/\eta ,$$ (8) $$F_\nu =\left(1+z\right)D^3\pi \left(\frac{r}{d_L}\right)^2\frac{2m\nu ^{\prime 2}}{p+2}\left(\frac{4\pi mc\nu ^{}}{3qB^{}}\right)^{1/2}\frac{F_1\left(\nu ^{}/\nu _m^{}\right)}{F_2\left(\nu ^{}/\nu _m^{}\right)}\left(1-e^{-\tau _\nu ^{}}\right),$$ (9) where $`F_1\left(x\right)`$, $`F_2\left(x\right)`$ are defined by Eq.(5) of Li & Chevalier (1999), $`m`$ and $`q`$ denote the mass and charge of the electron, $`d_L`$ is the luminosity distance of the burst, assuming a flat Friedmann universe with $`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, and $`\eta \approx 10`$ characterizes the width of the shock shell. Here $`D\equiv 1/\left[\gamma \left(1-\beta \right)\right]`$ describes the relativistic effect, and $`\nu `$ relates to the corresponding frequency $`\nu ^{}`$ in the comoving frame by $`\nu =D\nu ^{}/\left(1+z\right)`$. Using the above full set of equations, we computed the radio flux at the frequency $`\nu _\oplus =8.46\mathrm{GHz}`$ and plotted the model fit in Fig. 1 as the solid line. We find that the following combination of parameters fits almost all valid data rather well: $`E\approx 0.8\times 10^{52}\mathrm{erg}`$, $`ϵ_e\approx 0.2`$, $`ϵ_B\approx 1\times 10^{-4}`$, $`n\approx 1\times 10^5\mathrm{cm}^{-3}`$, $`z\approx 0.55`$ and, at the initial time $`t_\oplus =1/3\mathrm{days}`$, the Lorentz factor $`\gamma =1.2`$ ($`\beta \approx 0.55`$). We stress that these parameters are in excellent agreement with those inferred independently from the X-ray and optical afterglows (Dai & Lu 1999b), as listed in Eq.(2). Clearly, there are some large amplitude variations in the observed light curve (e.g. about 18 days after the burst), which are believed to be caused by diffractive scintillation. Also plotted in Fig. 1 is the fit (dashed line) computed with the sub-relativistic model as presented in the appendix of Frail, Waxman & Kulkarni (1999). The fit with this model was obtained by adopting the initial conditions $`t_0=1/3\mathrm{days}`$ and $`r_0=1.6\times 10^{16}\mathrm{cm}`$, and the parameter values ($`ϵ_e`$, $`ϵ_B`$, $`E`$, $`z`$ and $`n`$) the same as those in Dai & Lu (1999b).
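The numerical scheme sketched above — solving Eq. (3) for $`\gamma (r)`$ at each step and advancing $`r`$ through Eq. (4) — can be illustrated as follows (Python with SciPy; the starting radius and start time are illustrative assumptions, and the printed values are for orientation only, not the fit itself):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Fitted parameters from the text; cgs units.
E, n, z = 0.8e52, 1.0e5, 0.55
mp, c, day = 1.6726e-24, 2.9979e10, 86400.0

def Gamma2(g):                        # Eq. (6), with gamma_hat = (4g+1)/(3g)
    gh = (4 * g + 1) / (3 * g)
    return (g + 1) * (gh * (g - 1) + 1) ** 2 / (gh * (2 - gh) * (g - 1) + 2)

def energy_balance(g, r):             # Eq. (3): zero at the correct gamma(r)
    b = np.sqrt(1 - 1 / g ** 2)
    sigma = 0.73 - 0.38 * b
    return (4 / 3) * np.pi * sigma * b ** 2 * Gamma2(g) * r ** 3 * n * mp * c ** 2 - E

def gamma_of_r(r):
    return brentq(lambda g: energy_balance(g, r), 1 + 1e-12, 1e4)

def drdt(t, y):                       # Eq. (4), observer-frame time
    g = gamma_of_r(y[0])
    b = np.sqrt(1 - 1 / g ** 2)
    return [b * c * g * (g + np.sqrt(g ** 2 - 1)) / (1 + z)]

r0 = 1.0e16                           # assumed starting radius at t = 0.01 d
sol = solve_ivp(drdt, [0.01 * day, 60 * day], [r0], dense_output=True,
                rtol=1e-6)
for t_d in (1 / 3, 12, 60):
    r = sol.sol(t_d * day)[0]
    print(f"t = {t_d:5.2f} d:  r = {r:.2e} cm,  gamma = {gamma_of_r(r):.3f}")
```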
Comparing the trans-relativistic model with the sub-relativistic one, we can easily see that the relativistic effect (as characterized by $`D\equiv 1/\left[\gamma \left(1-\beta \right)\right]`$ in Eq.(9)) flattens the rising phase at earlier times, making the trans-relativistic model agree better with the observations, while at later times both fitting curves tend towards the asymptotic solution (i.e. $`F_\nu \propto t_\oplus ^{-2.1}`$).

## 4 Dust sublimation and optical extinction by dense medium

One may ask whether the dense circumburst medium may cause large extinction in the optical afterglow of GRB 980519. A crude estimate is as follows. At the time that the blast wave transits to the sub-relativistic stage ($`t_\oplus \approx \frac{1}{3}\mathrm{days}`$, $`\beta \approx 0.55`$, $`\gamma \approx 1.2`$), from Eq.(3) we derived its radius to be $`r\approx 2.1\times 10^{16}\mathrm{cm}`$. Therefore the characteristic column density through the medium into which the blast wave is expanding is about $`nr\approx 2.1\times 10^{21}\mathrm{cm}^{-2}`$, with the corresponding $`A_V`$ of 1.3 magnitudes in the rest frame of the absorber. This column density is comparable to (but slightly larger than) the Galactic 21 cm column density ($`1.74\times 10^{21}\mathrm{cm}^{-2}`$, Halpern et al. 1999) in the direction of GRB 980519. Since $`A_\lambda `$ scales linearly with $`\frac{1}{\lambda }`$, the absorption in our observed B band for this optical transient is about $`1.3\left(1+z\right)\approx 2`$ magnitudes, twice the value adopted by Halpern et al. (1999) for correction of the relative extinction. In the above estimate, we have made a questionable assumption, i.e. that the dense medium around the burst has a standard gas-to-dust ratio. However, this may be not realistic, considering that the dust around the burst can be destroyed due to sublimation out to an appreciable distance ($`\sim `$ a few pc) by the prompt optical-UV flash (Waxman & Draine 1999; hereafter WG99) accompanying the prompt burst. Below we will give an estimate of the destruction radius for the dense medium case, following WG99. The prompt optical flash detected accompanying GRB 990123 (Akerlof et al. 1999) suggests that, at least for some GRBs, $`\gamma `$-ray emission is accompanied by prompt optical-UV radiation with luminosity in the 1-7.5 eV range of the order of $`10^{49}\left(\frac{\mathrm{\Delta }\mathrm{\Omega }}{4\pi }\right)\mathrm{erg}/\mathrm{s}`$ for typical GRB parameters (WG99), where $`\mathrm{\Delta }\mathrm{\Omega }`$ is the solid angle into which $`\gamma `$-ray and optical-UV emission is beamed. The most natural explanation of this flash is emission from a reverse shock propagating into the fireball ejecta shortly after it interacts with the surrounding gas (Sari & Piran 1999; Mészáros & Rees 1999). As for GRB 980519, with the parameter values $`ϵ_e\approx 0.2`$, $`ϵ_B\approx 10^{-4}`$, $`n\approx 10^5\mathrm{cm}^{-3}`$, $`E\approx 8\times 10^{51}\mathrm{erg}`$ and the burst duration $`\mathrm{\Delta }t\approx 70\mathrm{s}`$, we derive the luminosity in the 1-7.5 eV range to be about $`L_{1-7.5}\approx 5\times 10^{48}\mathrm{erg}/\mathrm{s}`$ (here we have assumed that the electron and magnetic field energy fractions in the reverse shock are similar to those in the forward shock; Wang, Dai & Lu 1999c). The condition for the grain to be completely sublimed during the prompt flash time is $$T>T_c\approx 2300\mathrm{K}\left[1+0.033\mathrm{ln}\left(\frac{a_5}{\mathrm{\Delta }t/10\mathrm{s}}\right)\right],$$ (10) where $`T`$ is the grain temperature, determined by Eq.(8) of WG99, and $`a\equiv a_5\times 10^{-5}\mathrm{cm}`$ is the radius of the dust grain.
Then, according to Eq.(17) of WD99, the radius out to which the prompt flash can heat grains to the critical temperature $`T_c`$ is
$$R_c\approx 3.7\times 10^{19}\left(\frac{Q_{UV}L_{49}(1+0.1a_5)}{a_5}\right)^{1/2}\mathrm{cm}\approx 2.7\times 10^{19}\,\mathrm{cm},$$ (11)
where $`Q_{UV}`$ is the absorption efficiency factor for the optical-UV flash and can be assumed to be near one for the grain radii $`a\gtrsim 10^{-5}\,\mathrm{cm}`$ expected in a dense medium. Thus, we can safely say that the extinction due to the circumburst dense medium is not important if the size of the dense medium is smaller than $`R_c`$; we think that this condition is reasonable for GRB 980519, and also for GRB 990123 (Dai & Lu 1999a).
## 5 Discussion and Conclusions
The detection of strong diffractive scintillation requires that the angular size of the source VLA J232221.5+771543 be less than $`1\,\mu\mathrm{arcsec}`$ even 15 days after the burst (Frail et al. 1999a); otherwise the fluctuations would be suppressed. This small inferred size is not consistent with the spherical, homogeneous model with a normal density ($`n\approx 1\,\mathrm{cm}^{-3}`$), but can marginally be consistent with the jet model and the wind-shaped circumburst medium model. We note that the dense medium model may have a potential advantage with respect to this requirement, simply because the shock is quickly decelerated to a sub-relativistic velocity and therefore has a smaller shock radius. This can be clearly seen from the following comparison: $`\theta_{s,rel}\propto\left[\int_0^{t_\oplus}2\gamma^2c\,dt\right]\left(1/\gamma(t_\oplus)\right)`$ for the relativistic case, while $`\theta_{s,sub}\propto\int_0^{t_\oplus}\beta c\,dt`$ ($`\beta<1`$) for the sub-relativistic one. Here, for the afterglow of GRB 980519, we assume that before $`t_\oplus\approx 1/3\,\mathrm{days}`$ the shock is adiabatic and relativistic; thus the shock radius is $`r(t_\oplus)\approx(17Et_\oplus/4\pi m_pnc(1+z))^{1/4}\approx 0.78\times 10^{16}(E/3\times 10^{51}\,\mathrm{erg})^{1/4}(n/3\times 10^5\,\mathrm{cm}^{-3})^{-1/4}(t_\oplus/\frac{1}{3}\,\mathrm{days})^{1/4}[(1+z)/1.55]^{-1/4}\,\mathrm{cm}`$ (Sari, Piran & Narayan 1998). Thereafter, the shock radius should follow the Sedov-von Neumann-Taylor self-similar solution, $`r(t_\oplus)\propto t_\oplus^{2/5}`$. Thus, we obtain the angular size of the afterglow $`\theta_s\approx 0.8\,\mu\mathrm{arcsec}\,(t_\oplus/15\,\mathrm{days})^{2/5}`$. The agreement is even improved if we note that at the beginning of the assumed sub-relativistic stage, the radius should increase more slowly than the self-similar solution in the trans-relativistic regime. The strong modulations caused by scintillation also make the estimate of the spectral slope in the radio band less accurate. The average value of the spectral slope from day 12 onwards is $`\beta\approx 0.45\pm 0.6`$ (where $`F_\nu\propto\nu^\beta`$), implying that the time-averaged self-absorption frequency $`\nu_a`$ is between 1.43 GHz and 4.86 GHz (Frail et al. 1999b). In our model, the time ($`t_\oplus\approx 12\,\mathrm{days}`$) when the fitting curve begins to decline corresponds to $`\nu_a=\nu_\oplus=8.46\,\mathrm{GHz}`$. Since the self-absorption frequency decays as $`\nu_a\propto t_\oplus^{-(3p-2)/(p+4)}\propto t_\oplus^{-0.94}`$ for $`p=2.8`$, we expect that $`\nu_a`$ shifted quickly below 4.86 GHz by about day 21, but remained above 1.43 GHz over the whole detection period, which is in reasonable agreement with the observations.
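The two numerical claims of this section — the destruction radius of Eq. (11) and the decay of the self-absorption frequency — can be checked with a short sketch (our illustration; $`L_{49}\approx 0.5`$ follows from the luminosity derived above):

```python
import math

# Eq. (11): grain-destruction radius for the quoted flash luminosity.
def R_c(Q_UV, L49, a5):
    return 3.7e19 * math.sqrt(Q_UV * L49 * (1.0 + 0.1 * a5) / a5)   # [cm]

print(f"R_c ~ {R_c(1.0, 0.5, 1.0):.2e} cm")       # ~2.7e19 cm, as quoted

# Decay of the self-absorption frequency, nu_a ~ t^-0.94 for p = 2.8,
# normalized to nu_a = 8.46 GHz at day 12.
def nu_a_GHz(t_days, p=2.8):
    return 8.46 * (t_days / 12.0) ** (-(3.0 * p - 2.0) / (p + 4.0))

for t in (12.0, 21.0, 80.0):
    print(f"day {t:4.0f}: nu_a ~ {nu_a_GHz(t):.2f} GHz")
```

The sketch shows $`\nu_a`$ crossing 4.86 GHz near day 21-22 and remaining above 1.43 GHz until roughly day 80.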
The radio afterglow of GRB 970508, the longest light curve ($`\sim 450`$ days) obtained so far, exhibited different behavior from that of GRB 980519 (Frail et al. 1997; Waxman, Kulkarni & Frail 1998; Frail, Waxman & Kulkarni 1999). From the spectral and temporal radio behavior, Frail, Waxman & Kulkarni (1999) inferred that the fireball had undergone a transition to sub-relativistic expansion at $`t\approx 100\,\mathrm{days}`$, consistent with the inferred low ambient density $`n\approx 1\,\mathrm{cm}^{-3}`$ (but also see Chevalier & Li 1999a). On the other hand, some radio afterglows (e.g. GRB 990510, GRB 981226; Frail et al. 1999b) show behavior similar to that of GRB 980519, that is, they exhibit a slow rise to maximum over a relatively short time and then a fast decline to below detectability. It is likely that the shocks of these bursts entered the sub-relativistic stage after a short relativistic phase, and our above model can also describe their radio afterglows. Harrison et al. (1999) interpreted the broad-band light-curve break in the afterglow of GRB 990510 as due to a jet-like outflow. We speculate that another possible explanation is that the shock entered the sub-relativistic stage after $`\sim 1\,\mathrm{day}`$ as the result of the combination of the dense medium and jet effects (Wang et al. 1999b), the latter of which may be real in view of the large inferred isotropic energy. The radio afterglow of GRB 990123 is unique for its “flare” behavior (Kulkarni et al. 1999b), whose most natural explanation is that it arises from the reverse shock, as evidenced by the prompt optical flash (Sari & Piran 1999). Our preliminary computation (using the trans-relativistic model) shows that the radio emission from the forward shock in the dense medium model is significantly lower than that from the reverse shock and declines quickly after the peak time, if a jet-like outflow with an opening angle $`\theta\approx 0.2`$, as required by the “energy crisis” of this burst, is invoked (Wang et al. 1999b). Moreover, the fast decline of the radio emission from the forward shock, which is caused by the deceleration of the shock in the sub-relativistic stage, is consistent with the non-detection even 3 days after the burst. In summary, we argue that the dense medium model, which has interpreted the optical to X-ray afterglows of GRB 980519 quite well, can also account excellently for the radio afterglow. The circumburst environment can affect the evolution of GRB afterglows significantly (Mészáros, Rees & Wijers 1998; Panaitescu, Mészáros & Rees 1998; Wang et al. 1999a). For a low-density ($`n\approx 1\,\mathrm{cm}^{-3}`$), homogeneous environment, the shock wave stays in the relativistic stage for quite a long time, while in the dense medium case it quickly enters the sub-relativistic stage. Recently, a generic dynamical model for the evolution of shocks from the ultra-relativistic phase to the sub-relativistic one has also been developed by Huang et al. (1999a). The optically thin radiation (e.g. optical and X-rays) from the shock decays more rapidly in the sub-relativistic stage than in the relativistic one. For the radio afterglow (usually $`\nu_m<\nu_a`$ in the sub-relativistic stage of this model), the dense medium model predicts a slow rise ($`\nu_\oplus<\nu_a`$), followed by a round peak and a late steep decline ($`\nu_\oplus>\nu_a`$), trending towards the behavior of the optical and X-ray afterglows.
Clearly, this behavior is different from that of the jet model in the early epoch. But it is somewhat similar to that of the wind model, making it difficult to distinguish between them through radio observations.
## Acknowledgments
We would like to thank the referee, Dr. R. Wijers, for his valuable suggestions and improvements to this manuscript. X. Y. Wang also thanks Dr. Y. F. Huang for helpful discussions. This work was supported by the National Natural Science Foundation of China under grants 19773007 and 19825109 and the Foundation of the Ministry of Education of China.
## 1 Introduction
In the pursuit of an understanding of diffraction in strong interactions, it is sensible to focus on those rapidity gap processes that we are in principle able to compute reliably, i.e. using QCD perturbation theory. Of all such processes, diffractive high $`p_t`$ photon production stands out as the one most accessible to perturbation theory. The process can already be studied at HERA via $`\gamma p\to\gamma X`$, where the final state photon is well separated in rapidity from the debris of the proton, $`X`$ . Note that it is not necessary to measure the system $`X`$ and that this enhances the reach in rapidity significantly. We shall show that the equivalent process can be studied at proposed future high energy $`e^+e^{-}`$ and $`\gamma\gamma`$ colliders with much higher event rates. Note that the process $`\gamma\gamma\to\gamma\gamma`$ is in principle also possible. However, the rates for this process are probably too low even for a future linear collider and we do not consider it further. We also remark that the current LEP collider is not able to measure either of these processes due to insufficient luminosity. For a much more detailed account of most of the results presented here, we refer to . Theoretical interest in the process $`\gamma\gamma\to\gamma X`$ dates back to the work of , where calculations were performed in fixed order perturbation theory and to lowest order in $`\alpha_s`$. Recent work has extended this calculation to sum all leading logarithms in energy, for real incoming photons and for real and virtual incoming photons . The cross-section for the relevant hard subprocess, $`\gamma q\to\gamma q`$, can be written
$$\frac{d\sigma_{\gamma q}}{dp_t^2}\approx\frac{1}{16\pi\widehat{s}^2}\left|A_{++}\right|^2$$ (1)
and we have ignored a small contribution that flips the helicity of the incoming photon. The photon-quark CM energy is given by $`\widehat{s}`$. To leading logarithmic accuracy
$$A_{++}=i\alpha\alpha_s^2\sum_qe_q^2\frac{\pi}{6}\frac{\widehat{s}}{p_t^2}\int_{-\infty}^{\infty}\frac{d\nu}{1+\nu^2}\frac{\nu^2}{(\nu^2+1/4)^2}\frac{\mathrm{tanh}\,\pi\nu}{\pi\nu}F(\nu)\mathrm{e}^{z_0\chi(\nu)}$$ (2)
where
$$z_0\equiv\frac{3\alpha_s}{\pi}\mathrm{log}\frac{\widehat{s}}{p_t^2},$$ (3)
$`\chi(\nu)=2(\mathrm{\Psi}(1)-\mathrm{Re}\,\mathrm{\Psi}(1/2+i\nu))`$ is the BFKL eigenfunction , $`F(\nu)=2(11+12\nu^2)`$ for on-shell photons, and the sum runs over the quark charges squared, $`e_q^2`$. The full photon-photon cross-section is obtained after multiplying by the photon parton density functions:
$$\frac{d\sigma_{\gamma\gamma}}{dxdp_t^2}=\left[\frac{81}{16}g(x,\mu)+\mathrm{\Sigma}(x,\mu)\right]\frac{d\sigma_{\gamma q}}{dp_t^2}$$ (4)
and we take the factorization scale $`\mu=p_t`$. Throughout we take $`\alpha_s=0.2`$, as indicated by HERA and Tevatron data . The larger $`z_0`$, the larger the rapidity gap between the outgoing photon and the system $`X`$, since
$$z_0\approx\frac{3\alpha_s}{\pi}\mathrm{\Delta}\eta.$$
For $`z_0\gtrsim 0.5`$ one begins to access the most interesting Regge region .
## 2 Results: $`e^+e^{-}`$ mode
Since it is not possible to measure this process at LEP, we turn our attention immediately to a future linear collider operating in $`e^+e^{-}`$ mode at $`\sqrt{s}=500`$ GeV and $`\sqrt{s}=1`$ TeV. We impose three different cuts on the minimum angle ($`\theta_\gamma`$) of the emitted photon and make a cut to ensure that its energy is above 5 GeV.
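Before presenting the results, we note that the energy dependence of Eq. (2) is easy to explore numerically. The sketch below is our illustration, not code from the original analysis; it assumes scipy's digamma accepts complex arguments, and the finite integration range is adequate because the integrand falls off like a power of $`\nu`$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

alpha_s = 0.2   # value used throughout the text

def z0(s_hat, pt):
    """z0 = (3 alpha_s / pi) log(s_hat / pt^2), Eq. (3)."""
    return 3.0 * alpha_s / np.pi * np.log(s_hat / pt**2)

def chi(nu):
    """BFKL eigenfunction: chi(nu) = 2 (Psi(1) - Re Psi(1/2 + i nu))."""
    return 2.0 * (digamma(1.0) - digamma(0.5 + 1j * nu).real)

def bfkl_integral(z):
    """The dimensionless nu-integral appearing in Eq. (2)."""
    def f(nu):
        F = 2.0 * (11.0 + 12.0 * nu**2)               # on-shell photon factor
        t = np.tanh(np.pi * nu) / (np.pi * nu) if nu != 0.0 else 1.0
        return (nu**2 / ((nu**2 + 0.25)**2 * (1.0 + nu**2))
                * t * F * np.exp(z * chi(nu)))
    val, _ = quad(f, -8.0, 8.0, limit=200)
    return val

# The cut s_hat > 1e3 GeV^2 with pt up to 10 GeV gives z0 ~ 0.4,
# the lower end of the range quoted below.
print(f"z0(1e3 GeV^2, pt=10 GeV) = {z0(1e3, 10.0):.2f}")
print(f"nu-integral at z0 = 0.5: {bfkl_integral(0.5):.1f}")
```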
Our results are insensitive to this photon energy cut since we also impose a cut on the subprocess centre-of-mass energy: $`\widehat{s}>10^3`$ GeV<sup>2</sup>. This cut is not easy to implement experimentally, but is essentially related to the size of the final state rapidity gap. Making this cut ensures that $`z_0`$ is large, i.e. that the photon and the system $`X`$ are well separated. We also make an anti-tag cut on the outgoing electron and positron, i.e. we insist that they be emitted at angles less than 100 mrad. When quoting event rates, we assume a total integrated luminosity of 50 fb<sup>-1</sup>. In all cases, we show the cross-section integrated over the $`p_t`$ of the emitted photon subject to the constraint 1 GeV $`<p_t<10`$ GeV and over the invariant mass of the system $`X`$ subject to $`M_X<10`$ GeV. The photon flux of and the photon parton density functions of were used. Our results are summarized in Table 1. The range of $`z_0`$ values accessible to a future linear collider is restricted by the requirement that the photon appear in the detector. However, one does gain by going to higher beam energies, since the dissociated system $`X`$ does not need to be seen. In this sense optimal configurations involve a relatively soft photon colliding with a hard photon. For the scenarios presented in the table, we find $`0.4<z_0<1.8`$, which is well into the region of interest. Due to the softness of the bremsstrahlung photon spectrum, one gains appreciably in rate by progressing to higher beam energies. Note the very strong dependence upon the minimum angle of the detected photon.
## 3 Results: $`\gamma\gamma`$ mode
Next we turn to the photon collider. For the flux of photons we use typical parameters : $`x=4.8`$, $`P_c=-1`$ (i.e. negative helicity laser photons) and $`2\lambda=1`$ (i.e. 100% longitudinally polarized electron and positron beams). The choice $`2\lambda P_c=-1`$ leads to a hard photon spectrum:
$$\int_{0.65}^{z_{max}}dz\frac{d\mathcal{L}_{\gamma\gamma}}{dz}=30\%$$ (5)
where $`z_{max}=x/(x+1)`$, $`z=W_{\gamma\gamma}/\sqrt{s}`$ ($`\sqrt{s}/2`$ is the electron beam energy) and the integral down to $`z=0`$ is defined to give unity. We use the photon flux of and normalize it to obtain the $`e^+e^{-}`$ cross-section using the “rule of thumb” $`\mathcal{L}_{\gamma\gamma}(z>0.65)\approx 0.15\,\mathcal{L}_{ee}`$ . In particular, the $`e^+e^{-}`$ rate is determined using
$$\sigma_{ee}=\frac{0.15}{0.30}\int_0^{z_{max}^2}d\tau\int_{\tau/z_{max}}^{z_{max}}dy\,f(y,\tau)\,\sigma_{\gamma\gamma}$$ (6)
where
$$\frac{d\mathcal{L}_{\gamma\gamma}}{d\tau}=\int_{\tau/z_{max}}^{z_{max}}dy\,f(y,\tau)$$ (7)
and $`\tau=z^2`$. We are able to use a single photon luminosity function since the $`\gamma\gamma`$ cross-section is independent of the photon helicity (for transverse photons). Table 2 shows our results for the photon collider, assuming the same cuts as in the previous section. The $`z_0`$ range for the photon collider is similar to that for the $`e^+e^{-}`$ collider, i.e. for the events in Table 2, $`0.4<z_0<1.8`$. For the photon collider, the photon spectrum is not soft. This means that ultimately the rate falls as the beam energy increases. This can be seen in the table, where the 500 GeV and 1 TeV rates are similar. Going to higher beam energies leads to a slow reduction in rate (however, the typical $`z_0`$ range does still move to higher values). For example, at a 5 TeV collider the cross-section is 1.4 pb and the $`z_0`$ range is $`0.5-2`$ ($`\theta_\gamma=0.1`$).
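Converting the quoted cross-sections into event yields at the assumed 50 fb<sup>-1</sup> is a one-line exercise (a sketch; the 1.4 pb input is the 5 TeV example above):

```python
def n_events(sigma_pb, lumi_fb=50.0):
    """Expected events: 1 pb = 1000 fb, N = sigma * integrated luminosity."""
    return sigma_pb * 1000.0 * lumi_fb

print(n_events(1.4))   # 70000.0 events for the 5 TeV photon-collider example
```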
The drop in rate as one increases the detected photon angle is even more dramatic than in the $`e^+e^{-}`$ mode and is a consequence of the harder photon spectrum. One can imagine further improving the capabilities of the photon collider by arranging to use one soft photon spectrum and one harder spectrum. The harder photon dissociates into a very forward system, whilst the softer photon is easier to scatter into the detector. One way to do this would be to operate the photon collider with $`2P_c\lambda=+1`$ to produce the softer spectrum and $`2P_c\lambda=-1`$ for the harder spectrum. Even better would be to operate in the $`e\gamma`$ mode, where a soft bremsstrahlung photon is made to collide with a hard Compton photon.
## 4 Summary
A linear $`e^+e^{-}`$ collider operating at 500 GeV and beyond is the ideal place to study the diffractive production of high $`p_t`$ photons via $`\gamma\gamma\to\gamma X`$. It will have the capacity to provide very important information in abundance on the short-distance domain of large rapidity gap physics. Operating in the photon collider mode offers the opportunity to further enhance the rate. In all cases, the rate grows rapidly with decreasing angle of the detected photon. This short note has focussed on the production of high $`p_t`$ photons. Also of tremendous interest is the production of high $`p_t`$ vector particles in general, e.g. $`\rho,\omega,\varphi`$ and $`J/\mathrm{\Psi}`$. These processes would also occur in abundance at a future linear collider.
## Acknowledgements
Thanks to Albert de Roeck and Stefan Soldner-Rembold for helpful discussions and to Georgi Jikia for providing the code to compute the photon luminosity for the photon collider.
## 1 Introduction
One of the most promising ideas for a high-energy accelerator to complement the LHC is a linear $`e^+e^{-}`$ collider (LC) with a centre-of-mass energy $`E_{CM}`$ in the TeV range . The crucial parameters of such a LC are $`E_{CM}`$ and the luminosity. The optimal choice of $`E_{CM}`$ is constrained by technology and cost, but should be driven by physics arguments based on the accessibility of physics thresholds. One established threshold in the energy range of interest is that for $`e^+e^{-}\to\overline{t}t`$ at about 350 GeV . A second threshold likely to be in this energy range is that for Higgs boson ($`H`$) production via the reaction $`e^+e^{-}\to H+Z`$ . For some years , the precision electroweak data have favoured a relatively light Higgs boson, as suggested independently by supersymmetry. The most recent indication is that $`M_H<200`$ GeV at the 95% confidence level , corresponding to a $`H+Z`$ threshold below about 300 GeV. Since supersymmetry is widely considered to be one of the most promising possible low-energy extensions of the Standard Model, it is desirable that any new collider offer good prospects of detecting at least some supersymmetric particles, as is the case for the LHC . The physics argument that has usually been employed to estimate the sparticle mass scale $`\stackrel{~}{m}`$ has been that of the naturalness of the gauge hierarchy, which suggests that $`\stackrel{~}{m}<1`$ TeV . A supporting argument has been the concordance of the gauge couplings measured at LEP and elsewhere with the predictions of supersymmetric Grand Unified Theories (GUTs) . However, this argument is sensitive only logarithmically to $`\stackrel{~}{m}`$, and is also vulnerable to GUT threshold effects due to particles beyond the minimal supersymmetric extension of the Standard Model (MSSM). The agreement of the Higgs mass range favoured by the precision electroweak data with that calculated in the MSSM is also encouraging, but is again only logarithmically sensitive to $`\stackrel{~}{m}`$, and hence unable to specify it with any accuracy. An independent argument for new physics around the TeV scale is provided by calculations of cold dark matter, which yield naturally a freeze-out density in the cosmologically allowed range, $`\mathrm{\Omega}_{CDM}h^2<0.3`$ (where $`\mathrm{\Omega}\equiv\rho/\rho_c`$, the critical density, and $`h`$ is the Hubble expansion rate in units of 100 km/s/Mpc), and in that preferred by theories of structure formation, $`0.1<\mathrm{\Omega}_{CDM}h^2`$, if the mass of the cold dark matter particle is $`<10`$ TeV . The upper limit on $`\mathrm{\Omega}_{CDM}h^2`$ is fixed by the age of the Universe. For $`\mathrm{\Omega}_{tot}\simeq 1`$, a lower limit on the age of the Universe of 12 Gyr implies an upper limit $`\mathrm{\Omega}_mh^2<0.3`$ on the total matter density, and hence $`\mathrm{\Omega}_{CDM}<\mathrm{\Omega}_m`$. This argument does not rely on the high-redshift supernova observations , but they do support it. A serendipitous prediction of $`\mathrm{\Omega}_{CDM}`$ is provided by the MSSM with $`R`$ parity conservation , if the lightest supersymmetric particle (LSP) is the lightest neutralino $`\chi`$, as in many versions of the MSSM. Indeed, it has been shown that the most ‘natural’ choices of MSSM parameters, from the point of view of the gauge hierarchy, yield a relic LSP density in the astrophysical and cosmological region $`0.1<\mathrm{\Omega}_\chi h^2<0.3`$ .
In this case, detailed calculations of the relic LSP abundance yield $`\mathrm{\Omega}_{CDM}\lesssim 0.3`$ only for $`m_\chi<600`$ GeV . An essential role in this relic density calculation is played by $`\chi-\tilde{\ell}`$ coannihilation effects when the LSP is mainly a gaugino, which increase significantly the upper limit on the LSP mass quoted previously . The idea we propose in this paper is that the relic density calculation be used to specify the likelihood that a LC with given $`E_{CM}`$ will be above the sparticle pair-production threshold, and able to detect at least some supersymmetric cross section. The answer is necessarily higher than $`E_{CM}=2\times m_\chi^{max}`$, since the process $`e^+e^{-}\to\chi\chi`$ is not directly observable in models with a stable neutralino LSP $`\chi`$. On the other hand, as we discuss in more detail below, $`m_\chi^{max}\simeq 600`$ GeV is attained when $`m_\chi\simeq m_{\stackrel{~}{\tau}}`$, with $`m_{\stackrel{~}{\mu}},m_{\stackrel{~}{e}}`$ not much heavier, so one might expect that a LC with $`E_{CM}`$ not far above 1200 GeV should be sufficient. As we show in more detail below, a LC with $`E_{CM}=500`$ GeV or 1 TeV would only be able to detect supersymmetry in a fraction of the preferred dark matter region of MSSM parameter space. A LC with $`E_{CM}=1.5`$ TeV would probably cover the preferred region, but might miss some part of the $`\chi-\tilde{\ell}`$ coannihilation ‘tail’ at large $`m_{1/2}`$, depending on the luminosity it attains. A LC with $`E_{CM}=2`$ TeV would, on the other hand, be able to cover all the cosmological region with a comfortable safety margin in terms of cross section, kinematic acceptance and astrophysical uncertainties.
## 2 Summary of LSP Density Calculations
We assume $`R`$ parity is conserved, since otherwise there would be no stable supersymmetric dark matter to interest us. We work within the constrained MSSM, in which all the supersymmetry-breaking soft scalar masses are assumed to be universal at the GUT scale with a common value $`m_0`$, and the gaugino masses are likewise assumed to be universal with common value $`m_{1/2}`$ at the GUT scale . The constrained MSSM parameters are chosen so as to yield a consistent electroweak vacuum, with a value of $`\mathrm{tan}\beta`$ that is left free. The LEP lower limits on MSSM particles, including the lightest Higgs boson, suggest that $`\mathrm{tan}\beta>3`$, so we consider this and the higher value $`\mathrm{tan}\beta=10`$. We consider two possible values of the trilinear soft supersymmetry-breaking parameter, $`A=0,m_{1/2}`$, the latter being the value for which the constraint that the lowest-energy state not break charge and colour (CCB) is weakest , consistent with parameter choices out to the point at the tip of the cosmological region. When calculating the relic density of LSPs $`\chi`$, it is assumed that they were in thermal equilibrium prior to freeze-out at some temperature $`T_f`$. The relic density after freeze-out is then determined by the competition between the expansion rate of the Universe and the neutralino annihilation rate. Ultimately, the relic density is inversely related to the effective annihilation cross section $`\sigma_{eff}`$, which falls off as the inverse square of the supersymmetry-breaking scale. Thus, as the supersymmetry-breaking scale is increased, the annihilation cross section decreases and the relic density increases.
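This inverse relation can be made quantitative with the standard freeze-out estimate familiar from cosmology texts (e.g. Kolb & Turner) — our addition, not a formula used in the text:

```python
# Standard freeze-out estimate: the relic abundance scales inversely
# with the thermally averaged effective annihilation cross section.
def omega_h2(sigma_v_cm3_per_s):
    return 3e-27 / sigma_v_cm3_per_s

print(omega_h2(1e-26))   # ~0.3: the upper edge of the preferred range
print(omega_h2(3e-26))   # ~0.1: tripling sigma_eff cuts the density to a third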
This is why an upper limit on the relic density puts an upper limit on the sparticle mass scale, and on the mass of the neutralino LSP in particular. In regions where the neutralino is mainly a gaugino (usually a bino), as in many models of interest, such as those with GUT-scale universality relations among the sparticle masses, the annihilation rate is dominated by sfermion exchange. As one approaches the upper limit on the neutralino mass, the cross section is maximized by taking sfermion masses as small as possible: in this case, the sleptons $`\tilde{\ell}`$ are nearly degenerate with the neutralino LSP $`\chi`$ <sup>1</sup><sup>1</sup>1The GUT universality conditions then imply that the squarks are considerably heavier.. When the LSP is nearly degenerate with the next-to-lightest supersymmetric particle (NLSP), it is known that new important coannihilation channels must be included to determine the relic neutralino density. Thus, in addition to the self-annihilation process $`\chi\chi\to`$ anything, the effective annihilation cross section includes important contributions from coannihilation processes involving slightly heavier sparticles $`\stackrel{~}{X},\stackrel{~}{Y}`$: $`\chi\stackrel{~}{X}\to`$ anything, $`\stackrel{~}{X}\stackrel{~}{Y}\to`$ anything, weighted by the corresponding Boltzmann density suppression factors:
$$\sigma_{eff}\simeq\sigma(\chi\chi)+\mathrm{\Sigma}_{\stackrel{~}{X}}e^{-(m_{\stackrel{~}{X}}-m_\chi)/T_f}\sigma(\chi\stackrel{~}{X})+\mathrm{\Sigma}_{\stackrel{~}{X},\stackrel{~}{Y}}e^{-(m_{\stackrel{~}{X}}+m_{\stackrel{~}{Y}}-2m_\chi)/T_f}\sigma(\stackrel{~}{X}\stackrel{~}{Y})$$ (1)
In the parameter region of interest after taking into account the LEP exclusions of light sparticles, the most important coannihilation processes are those involving the NLSP $`\stackrel{~}{\tau}`$ and the other sleptons $`\stackrel{~}{e},\stackrel{~}{\mu}`$, which are all taken into account in the following analysis . Several of these coannihilation cross sections are much larger than that for $`\chi\chi`$ annihilation close to threshold, because they do not exhibit $`P`$-wave suppression. Therefore, coannihilation is an essential complication. As noted above, since the resulting LSP relic density $`\mathrm{\Omega}_\chi h^2`$ increases as $`\sigma_{eff}`$ decreases, and since $`\sigma_{eff}`$ decreases as $`m_0,m_{1/2}`$ increase, one expects generically that $`\mathrm{\Omega}_\chi h^2`$ should increase with increasing $`m_0,m_{1/2}`$. This simple correlation is complicated in the presence of nearby $`s`$-channel $`Z^0`$ and Higgs poles in the annihilation cross sections, but the LEP exclusions now essentially rule out this possibility . As mentioned earlier in the paper, the preferred range of the cold dark matter density is $`0.1<\mathrm{\Omega}_{CDM}h^2<0.3`$. It is possible that not all the cold dark matter consists of LSPs $`\chi`$, so we can at best assume that $`\mathrm{\Omega}_\chi h^2\le\mathrm{\Omega}_{CDM}h^2<0.3`$. However, this upper limit on $`\mathrm{\Omega}_\chi h^2`$ is sufficient to infer an upper limit on $`m_0,m_{1/2}`$ <sup>2</sup><sup>2</sup>2On the other hand, the lower bound $`\mathrm{\Omega}_{CDM}h^2>0.1`$ cannot be transferred to a lower bound on $`\mathrm{\Omega}_\chi`$, and hence there are no corresponding lower bounds on $`m_0,m_{1/2}`$, except for those imposed by slepton searches and/or the requirement that the $`\stackrel{~}{\tau}`$ not be the LSP..
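The importance of the Boltzmann factors in Eq. (1) is easy to quantify. A minimal sketch, assuming a typical freeze-out temperature $`T_f\approx m_\chi/25`$ (our assumption, not a number taken from the text):

```python
import math

# Boltzmann suppression weights in Eq. (1).
m_chi = 600.0                  # GeV, near the tip of the coannihilation tail
T_f   = m_chi / 25.0
for dm in (5.0, 10.0, 20.0):   # chi-stau mass splittings [GeV]
    w = math.exp(-dm / T_f)
    print(f"m_stau - m_chi = {dm:4.1f} GeV -> weight e^(-dm/T_f) = {w:.2f}")
```

For splittings of a few GeV the weights are of order unity, which is why coannihilation cannot be neglected along the tail.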
In , the values of the two key supersymmetry-breaking inputs $`m_0,m_{1/2}`$ were constrained so that the neutralino relic density should fall within the desired range. Roughly speaking, when $`m_{1/2}<400`$ GeV, there is a relatively broad allowed range for $`m_0`$, between about 50 and 150 GeV, depending on $`\mathrm{tan}\beta,A`$ and the sign of $`\mu`$. For values of $`m_{1/2}>400`$ GeV, coannihilation becomes important, and $`m_0`$ is restricted to a relatively narrow range of typical thickness $`\delta m_0\approx 20`$ GeV. The maximum value of $`m_{1/2}`$ is determined by the point where there is no longer any value of $`m_0`$ such that the neutralino mass is less than the $`\stackrel{~}{\tau}_R`$ mass and $`\mathrm{\Omega}_{CDM}h^2<0.3`$. This occurs when $`m_{1/2}\approx 1400`$ GeV, corresponding to the neutralino mass of about 600 GeV mentioned previously. This is the essence of our argument that the relic density calculation can be used to specify the $`e^+e^{-}`$ collider energy required to produce sparticles. The upper limit on the neutralino mass including coannihilation effects, $`m_\chi<600`$ GeV, is relatively insensitive to such MSSM parameters as $`\mathrm{tan}\beta`$ and $`A`$. As in , we consider here the two cases $`\mathrm{tan}\beta=3,10`$, and initially set $`A`$ close to the weak-CCB value $`A=m_{1/2}`$. As mentioned earlier, the upper limit on $`m_\chi`$ implies that the threshold for pair-producing sparticles must be at least $`E_{CM}=1200`$ GeV. In fact, when the limit $`m_\chi\simeq 600`$ GeV is reached, one also has $`m_\chi=m_{\stackrel{~}{\tau}_1}`$, where the NLSP $`\stackrel{~}{\tau}_1`$ is the lighter stau mass eigenstate, so the threshold for the reaction $`e^+e^{-}\to\stackrel{~}{\tau}^+\stackrel{~}{\tau}^{-}`$ is also $`\approx 1200`$ GeV. Moreover, the mass of the $`\stackrel{~}{e}_R`$ is also not far above $`\sim 600`$ GeV, so the threshold for $`e^+e^{-}\to\stackrel{~}{e}_R^+\stackrel{~}{e}_R^{-}`$ is also not far beyond $`\approx 1200`$ GeV. In addition, it is easy to check that even if one allows $`m_\chi<m_{\stackrel{~}{\tau}_1}`$, which is possible if $`m_\chi<600`$ GeV, the threshold for $`e^+e^{-}\to\stackrel{~}{\tau}^+\stackrel{~}{\tau}^{-}`$ is never above $`\approx 1200`$ GeV. These arguments all suggest that $`E_{CM}=1200`$ GeV may be sufficient for an $`e^+e^{-}`$ linear collider to observe supersymmetry, but any such conclusion must hinge upon the analysis of the observability of the sparticle pair-production cross section that we undertake next.
## 3 Analysis of Sparticle Pair-Production Cross Sections
In order to determine the region of the $`(m_0,m_{1/2})`$ plane that can be explored with a linear $`e^+e^{-}`$ collider of given $`E_{CM}`$, we have calculated the total observable production cross section for the pair production of sparticles $`e^+e^{-}\to\stackrel{~}{X}\stackrel{~}{Y}`$, where $`\stackrel{~}{X}`$ and $`\stackrel{~}{Y}`$ are not necessarily particle and antiparticle . In this context, ‘observable’ means that we do not include pair production of the LSP, $`e^+e^{-}\to\chi\chi`$. Nor do we include sneutrino pair production, $`e^+e^{-}\to\stackrel{~}{\nu}\stackrel{~}{\overline{\nu}}`$, although some $`\stackrel{~}{\nu}`$ decays might be visible. Also, the production cross sections for heavier neutralinos $`\chi^{\prime}`$, e.g., $`e^+e^{-}\to\chi\chi^{\prime}`$, are corrected for invisible $`\chi^{\prime}`$ decay branching ratios. Finally, we assume that the ordinary particles emitted in a sparticle decay chain are observable only if the mass difference between the sparticle and the LSP is $`\mathrm{\Delta}M>3`$ GeV.
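The quoted endpoint can be cross-checked against the standard one-loop running of the bino mass in the constrained MSSM, $`M_1\approx 0.43\,m_{1/2}`$ at the weak scale (our assumption; the relation is not stated in the text):

```python
# Bino mass from the standard one-loop gaugino-mass running (assumed).
m_half = 1400.0                             # GeV, the quoted endpoint
print(f"m_chi ~ {0.43 * m_half:.0f} GeV")   # ~600 GeV, matching the text
```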
We assume an integrated luminosity $`\mathcal{L}=100`$ fb<sup>-1</sup> . In order to estimate the corresponding sensitivity to the new-physics cross section $`\sigma`$, the relevant quantity is $`B\equiv\sqrt{\sigma_{bg}}/ϵ`$, where $`\sigma_{bg}`$ is the residual cross section for background processes, and $`ϵ`$ is the signal-detection efficiency. As usual, a five-standard-deviation discovery is likely if $`\sigma>5\times B/\sqrt{\mathcal{L}}`$, whereas, in the absence of any observation, new-physics processes with $`\sigma>2\times B/\sqrt{\mathcal{L}}`$ will be excluded at about the 95% confidence level. At LEP 2, for mass differences between the produced sparticle and the LSP that are not too small, the background to searches for charginos $`\chi^\pm`$ and sleptons is mainly due to $`W^\pm`$ production, and typical values of $`B`$ were in the range $`3-6`$ (fb$`^{1/2}`$). At the LC we expect cleaner background conditions for both slepton and chargino searches, because the $`W^\pm`$ should be more easily distinguishable and also $`\sigma(e^+e^{-}\to W^+W^{-})`$ is smaller. It is therefore likely that $`B`$ is smaller than at LEP 2. We adopt a conservative approach and scale $`B`$ roughly by $`\sigma(e^+e^{-}\to W^+W^{-})`$, taking $`B=2`$ (fb$`^{1/2}`$), which gives a lower limit on the discoverable cross section of 1 fb, and an exclusion upper limit of 0.4 fb. Fig. 1 shows the physics discovery reach in the $`(m_0,m_{1/2})`$ plane for $`\mathrm{tan}\beta=3,10`$ provided by the processes $`e^+e^{-}\to\tilde{\ell}^+\tilde{\ell}^{-}`$, neutralinos and charginos $`\chi^+\chi^{-}`$ in $`e^+e^{-}`$ collisions at $`E_{CM}=500,1000,1250,1500`$ GeV, compared with the allowed cold dark matter region (shaded). The solid lines in Fig. 1 correspond to the estimated discovery cross section of 1 fb for $`e^+e^{-}\to\tilde{\ell}^+\tilde{\ell}^{-}`$, and the broken lines to the kinematic limit $`m_{\chi^\pm}=E_{CM}/2`$. We see no big differences between the plots for the different signs of $`\mu`$, nor indeed for the different values of $`\mathrm{tan}\beta`$. We note that $`e^+e^{-}\to\tilde{\ell}^+\tilde{\ell}^{-}`$ (solid lines) provides the greatest reach for each of the values $`E_{CM}=500,1000,1250,1500`$ GeV studied, and that chargino pair production $`e^+e^{-}\to\chi^+\chi^{-}`$ (broken lines) becomes progressively less important as $`E_{CM}`$ increases. We see in Fig. 1 the extent to which the region favoured by the cosmological requirement $`0.1\le\mathrm{\Omega}_\chi h^2\le 0.3`$ may be covered by LC searches at different energies. In particular, about a half of this region is covered by sparticle searches at $`E_{CM}=500`$ GeV, a somewhat larger fraction (but not all) is covered at $`E_{CM}=1000`$ GeV, and full coverage of the favoured region is approached only when $`E_{CM}=1500`$ GeV <sup>3</sup><sup>3</sup>3We note in passing that a LC with $`E_{CM}=500`$ GeV would have seemed perfectly adequate if coannihilation were not taken into account.. The reason why more than 1200 GeV is required is the $`P`$-wave threshold suppression of the observable processes with the lowest thresholds near the tip of the cosmological region, namely the reactions $`e^+e^{-}\to\tilde{\ell}_R^+\tilde{\ell}_R^{-}`$. Fig. 2 shows as three-dimensional ‘mountains’ the full observable sparticle cross section for $`\mathrm{tan}\beta=10`$ and $`\mu>0`$ at $`E_{CM}=500,1000,1250`$ and 1500 GeV, including also other pair-production processes.
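The sensitivity numbers quoted at the start of this section follow directly from the stated criteria; a minimal sketch:

```python
import math

def sigma_limit(B, lumi_fb, n_sigma):
    """Smallest accessible cross section [fb]: sigma > n_sigma * B / sqrt(L)."""
    return n_sigma * B / math.sqrt(lumi_fb)

B, L = 2.0, 100.0                      # B in fb^(1/2), L in fb^-1, as in the text
print(sigma_limit(B, L, 5.0))          # 1.0 fb: 5-sigma discovery
print(sigma_limit(B, L, 2.0))          # 0.4 fb: ~95% CL exclusion
```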
The irregularities in the outline of the three-dimensional ‘mountain’ plots correspond to the opening up of different sparticle pair-production thresholds. We see again that $`E_{CM}=500`$ GeV is not adequate to cover much of the cosmological region, that $`E_{CM}=1000`$ GeV does not cover a significant fraction of the high-$`m_{1/2}`$ tail opened up by coannihilation, and that $`E_{CM}\approx 1500`$ GeV covers the cosmological region. We find similar features for $`\mathrm{tan}\beta=10`$ and $`\mu<0`$, and also for $`\mathrm{tan}\beta=3`$ and both signs of $`\mu`$ (not shown). We now return to the tip of the cosmological tail, which occurs at $`m_\chi\simeq 630(610)`$ GeV for $`\mathrm{tan}\beta=3(10)`$ for our default option $`\mathrm{\Omega}_\chi h^2\le 0.3`$, and explore in more detail how much $`E_{CM}`$ beyond 1200 GeV is required to be sure of detecting supersymmetry. Fig. 3 shows the contributions to the effective observable cross section from the dominant reactions $`e^+e^{-}\to\tilde{\ell}_{L,R}^+\tilde{\ell}_{L,R}^{-}`$. Close to threshold, only pair production of the $`\tilde{\ell}_R`$ states is accessible, and this exhibits a $`P`$-wave suppression. The associated-production process $`e^+e^{-}\to\stackrel{~}{e}_L\stackrel{~}{e}_R`$ kicks in at somewhat higher energies, and rapidly dominates, because of its $`S`$-wave threshold. This is the origin of the kink seen in the rise of the total cross section in each of the panels of Fig. 3, where the discovery and exclusion sensitivities are also shown as horizontal broken lines. We see that an $`E_{CM}`$ only just above $`2m_\chi\simeq 1200`$ GeV is not sufficient for sparticle discovery, because of the small observable cross section. We recall that, for our assumed integrated luminosity of 100 fb<sup>-1</sup> and detector performance, the discovery cross-section limit would be 1 fb, as indicated by the upper horizontal broken line in Fig. 3. Of course, this may be altered by different assumptions on the integrated luminosity and/or detection efficiency <sup>4</sup><sup>4</sup>4We note, in particular, that higher luminosities may be achievable at higher $`E_{CM}`$.. Each of the panels in Fig. 3 exhibits alternative curves to be compared with our default choices $`\mathrm{\Omega}_\chi h^2=0.3`$ and $`A=0`$. The curves for $`\mathrm{\Omega}_\chi h^2=0.4`$ are for instruction only. In this case, one finds $`m_\chi<740(710)`$ GeV for $`\mathrm{tan}\beta=3(10)`$, but it is very difficult to reconcile such a large value of $`\mathrm{\Omega}_\chi h^2`$ with the emerging measurements of cosmological parameters <sup>5</sup><sup>5</sup>5For the record, for $`\mathrm{\Omega}_\chi h^2<0.5`$, the upper limit on the neutralino mass increases to $`m_\chi<830(800)`$ GeV for $`\mathrm{tan}\beta=3(10)`$.. In fact, we believe that allowing $`\mathrm{\Omega}_\chi h^2\le 0.3`$ is already quite conservative. For the preferred observational value $`h\simeq 1/\sqrt{2}`$, this would correspond to $`\mathrm{\Omega}_\chi\simeq 0.6`$, which extends far beyond the currently favoured range $`\mathrm{\Omega}_\chi\lesssim 0.4`$. If instead one enforces $`\mathrm{\Omega}_\chi h^2\le 0.2`$, one finds that the maximum value of the LSP mass becomes $`m_\chi\simeq 520(500)`$ GeV for $`\mathrm{tan}\beta=3(10)`$, and $`E_{CM}=1500`$ GeV would be adequate, as seen in Fig. 3. Indeed, in this case, $`E_{CM}=1200`$ GeV would be sufficient to cover all of the region of the $`(m_0,m_{1/2})`$ plane favoured by cosmology. We also show in Fig. 3 comparisons between the cross sections at the extreme points for $`A=0`$ and $`m_{1/2}`$.
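The severity of the $`P`$-wave suppression near threshold can be illustrated as follows (a sketch; the slepton mass of 620 GeV is our illustrative choice near the tip of the tail):

```python
import math

# Velocity of a pair-produced sparticle of mass m at CM energy E, and the
# P-wave (beta^3) threshold factor governing slepton pair production.
def beta(E_cm, m):
    return math.sqrt(max(0.0, 1.0 - (2.0 * m / E_cm) ** 2))

m_slepton = 620.0
for E in (1250.0, 1300.0, 1400.0, 1500.0):
    b = beta(E, m_slepton)
    print(f"E_CM = {E:6.0f} GeV: beta = {b:.2f}, beta^3 = {b**3:.3f}")
```

Just above threshold the $`\beta^3`$ factor suppresses the rate by two to three orders of magnitude, which is why an $`E_{CM}`$ barely above $`2m_\chi`$ does not suffice.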
Our conclusions are clearly insensitive to the ambiguity in the choice of $`A`$.
## 4 Conclusions
Finally, we show in Fig. 4 the fraction of the cosmologically-allowed region of the $`(m_{1/2},m_0)`$ plane that can be explored by a LC, as a function of the accessible limiting cross section $`\sigma_{lim}`$, for different values of $`E_{CM}`$. Once the detector performance is specified, the values of $`\sigma_{lim}`$ correspond to different values of the available luminosity, as indicated. We see in Fig. 4 that a LC with $`E_{CM}=1.5`$ TeV would cover all the cosmological region if $`\sigma_{lim}<5`$ fb <sup>6</sup><sup>6</sup>6A LC with $`E_{CM}=2`$ TeV would always cover all the cosmological region, even for a very pessimistic assumption on $`\sigma_{lim}`$., and one with $`E_{CM}=1.25`$ TeV if $`\sigma_{lim}<0.5`$ fb. On the other hand, a LC with $`E_{CM}=1`$ TeV could never cover all the cosmological region, and a LC with $`E_{CM}=0.5`$ TeV covers $`\sim 60`$% of it <sup>7</sup><sup>7</sup>7Fig. 4 is plotted using a linear measure for the cosmological region. The prospects for lower-energy machines would seem brighter if one used a logarithmic measure of the parameter space: e.g., using this measure, a LC with $`E_{CM}=0.5`$ TeV would cover over 80% of the cosmological region.. The conclusions to be drawn from this analysis are somewhat subjective, since they depend on how much you are prepared to bet, and at what odds. It could well be that new cosmological data might better inform your choice. For example, you could become more sanguine about the prospects for a lower-energy LC if the upper limit on $`\mathrm{\Omega}_\chi h^2`$ could be decreased to 0.2. Our point in this paper has been to establish that there is a phenomenological connection between the LC energy and supersymmetric dark matter, and we believe that Fig. 4 summarizes the best advice we can offer at the beginning of the third millennium. Acknowledgments We thank Toby Falk for many related discussions. The work of K.A.O. was supported in part by DOE grant DE–FG02–94ER–40823.
## 1 Introduction
The vast gap between the electroweak scale and the Planck scale, known as the ‘hierarchy problem’, remains a major mystery in particle physics. Recently, it has been suggested that large compactified extra dimensions may provide a solution to the hierarchy problem. The relation between the four-dimensional Planck scale $`M_{\mathrm{Pl}}`$ and the higher-dimensional scale $`M`$ is given by $`M_{\mathrm{Pl}}^2=M^{n+2}V_n`$, where $`V_n`$ is the volume of the extra compactified dimensions. If $`V_n`$ is large enough, $`M`$ can be of the order of several TeV. Unfortunately, this scenario alone does not completely solve the problem. The original hierarchy can be translated into another hierarchy, between $`M`$ and the compactification scale $`r_c^{-1}=V_n^{-1/n}`$ . In Ref. , Randall and Sundrum (RS) proposed an alternative scenario based on an extra dimension compactified on $`S^1/Z_2`$ with three-branes located at the two boundaries. Standard Model fields are assumed to be confined on a distant three-brane, while gravitons propagate in the five-dimensional bulk. The background metric takes the form
$$ds^2=G_{MN}dx^Mdx^N=e^{-2\sigma(y)}\eta_{\mu\nu}dx^\mu dx^\nu+dy^2,$$ (1)
where $`x^M=(x^\mu,y)`$ is the coordinate of the five-dimensional spacetime. By solving the five-dimensional Einstein equation, the function $`\sigma(y)`$ is found to be
$$\sigma(y)=k|y|+\sigma_0,$$ (2)
where $`k`$ is the curvature scale related to the five-dimensional cosmological constant $`\mathrm{\Lambda}`$. It was then shown that the effective Planck mass in four dimensions is given by the formula
$$M_{\mathrm{Pl}}^2=\frac{M^3}{k}(1-e^{-2\pi kr_c})e^{-2\sigma_0},$$ (3)
where $`r_c`$ is the radius of the fifth dimension. Two points are to be noted here. The first is that, according to the above formula (3), the four-dimensional Planck mass is ‘universal’; it takes the same value on both branes located at the different boundaries. On the other hand, it was argued that mass scales on the distant brane located at $`y=\pi r_c`$ are rescaled by the warp factor $`e^{-\pi kr_c}`$ . It is this difference that generates the hierarchy
$$\frac{M_W}{M_{\mathrm{Pl}}}\sim\frac{Me^{-\pi kr_c}}{M}\ll 1.$$ (4)
The second point to be noted is that the formula (3) apparently contains the integration constant $`\sigma_0`$, which is left undetermined by the boundary condition of the five-dimensional theory. Moreover, a certain kind of symmetry under the exchange of the two boundaries is not manifest in this formula (3), as is pointed out in Ref.. In this paper, we argue that these two points are intimately related. We carefully discuss the induced metric and the brane coordinates, and point out that the value of the four-dimensional Planck mass differs for the hidden and visible branes; that is, it is non-universal. As a result, we find that the four-dimensional Planck mass does not depend on $`\sigma_0`$, and the exchanging-symmetry is manifest. This paper is organized as follows. After a review of the RS model in §2, we present our formula for the Planck mass in §3. In §4, we apply our prescription to a bulk scalar field as an example and examine the masses of its Kaluza-Klein (KK) modes. We show that these masses are also independent of $`\sigma_0`$. As a result, in the $`k>0`$ scenario we can adjust $`M`$ to the TeV region, since the Planck mass as the gravitational coupling constant on the distant brane should be set to $`\sim 10^{18}`$ GeV, and we find that the mass of the KK modes is of the order of $`M`$. Section 5 is devoted to conclusions and discussion.
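To make this hierarchy adjustment concrete, one can ask what value of $`kr_c`$ is needed for the visible-brane Planck mass derived below to reach $`\sim 10^{18}`$ GeV with $`M\sim k\sim 1`$ TeV. A minimal sketch, with illustrative target scales (our choice, up to $`O(1)`$ factors):

```python
import math

# Solve M_Pl ~ M e^{pi k r_c} (the k ~ M case) for k r_c.
M, M_pl = 1e3, 1e18                        # GeV, illustrative inputs
krc = math.log(M_pl / M) / math.pi
print(f"k r_c ~ {krc:.1f}")                # ~11: a modest radius suffices
print(f"warp factor e^(pi k r_c) ~ {math.exp(math.pi * krc):.1e}")
```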
As a result, in the $`k>0`$ scenario, we can adjust $`M`$ to the TeV region since the Planck mass as the gravitational coupling constant on the distant brane should be set to $`10^{18}`$ GeV, and we find that the mass of the KK modes is on the order of $`M`$. Section 5 is devoted to conclusion and discussion. ## 2 RS model First, we review the derivation of the four-dimensional Planck mass within the scenario of Ref.. The background metric of the model takes the form $`ds^2=G_{MN}dx^Mdx^N=e^{2\sigma (y)}\eta _{\mu \nu }dx^\mu dx^\nu +dy^2,`$ (5) where $`x^M=(x^\mu ,y)`$ is the coordinate of the five-dimensional spacetime, and the fifth dimension is compactified on $`S^1/Z_2`$ with radius $`r_c`$. The fundamental region of the fifth dimension is given by $`0y\pi r_c`$. A set of three-branes is located at each fixed point $`y=y_i`$ of $`S^1/Z_2`$. The brane at $`y_0=0`$ is called a ‘hidden’ brane, and the brane at $`y_1=\pi r_c`$ is called ‘visible’. Here and hereafter, we use the subscripts $`i=0,1`$ for quantities at $`y=0`$, $`\pi r_c`$, respectively. The action is $`S`$ $`=`$ $`S_{\mathrm{gravity}}+S_{\mathrm{brane}},`$ $`S_{\mathrm{gravity}}`$ $`=`$ $`{\displaystyle d^4x_{\pi r_c}^{\pi r_c}𝑑y\sqrt{G}\left\{\mathrm{\Lambda }+2M^3R\right\}},`$ $`S_{\mathrm{brane}}`$ $`=`$ $`{\displaystyle \underset{i=0,1}{}}{\displaystyle d^4x\sqrt{g^{(i)}}\left\{_{(i)}V_{(i)}\right\}},`$ (6) where $`g_{\mu \nu }^{(i)}(x)=G_{\mu \nu }(x,y=y_i)`$ is the induced metric on the $`i`$-th brane, and the brane tensions $`V_{(i)}`$ are subtracted from the three-brane Lagrangians. With the metric (5), the five-dimensional Einstein equation reduces to two differential equations for $`\sigma (y)`$ (using $`\sigma ^{}=_y\sigma `$) : $`(\sigma ^{}(y))^2`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Lambda }}{24M^3}},`$ (7) $`\sigma ^{\prime \prime }(y)`$ $`=`$ $`{\displaystyle \underset{i=0,1}{}}{\displaystyle \frac{V_{(i)}}{12M^3}}\delta (yy_i).`$ (8) The solution to (7) is given by $`\sigma (y)=k|y|+\sigma _0,`$ (9) with $`k=\pm \sqrt{{\displaystyle \frac{\mathrm{\Lambda }}{24M^3}}}.`$ (10) Here, $`\sigma _0`$ is the integration constant which is not determined by the boundary condition of the five-dimensional theory.<sup>1</sup><sup>1</sup>1The integration constant $`\sigma _0`$ might be determined by the fundamental theory in higher dimensions. Conventionally, this integration constant $`\sigma _0`$ is omitted by saying that it just amounts to an overall constant rescaling of $`x^\mu `$. For any value of $`\sigma _0`$, the consistency of the solution (9) with the second equation (8) requires that the brane tensions and bulk cosmological constant are related by $`k={\displaystyle \frac{V_{(0)}}{24M^3}}={\displaystyle \frac{V_{(1)}}{24M^3}}.`$ (11) Following to Ref., we consider a fluctuation $`h_{\mu \nu }`$ of the Minkowski spacetime metric $`\eta _{\mu \nu }`$, and replace $`\eta _{\mu \nu }`$ by a four-dimensional metric $`\overline{g}_{\mu \nu }=\eta _{\mu \nu }+h_{\mu \nu }`$. Then we have $`S_{\mathrm{eff}}`$ $`=`$ $`{\displaystyle d^4x_{\pi r_c}^{\pi r_c}𝑑y\mathrm{\hspace{0.17em}2}M^3e^{2\sigma (y)}\sqrt{\overline{g}}R(\overline{g})}+\mathrm{}`$ (12) $``$ $`2M_{\mathrm{Pl}}^2{\displaystyle d^4x\sqrt{\overline{g}}R(\overline{g})}+\mathrm{},`$ where $`R(\overline{g})`$ denotes the four-dimensional scalar curvature constructed from $`\overline{g}_{\mu \nu }`$. This gives the formula (3) for the ‘universal’ four-dimensional Planck mass,<sup>2</sup><sup>2</sup>2We use the normalizaton of Ref.. 
$$M_{\mathrm{Pl}}^2=\frac{M^3}{k}(1-e^{-2\pi kr_c})e^{-2\sigma_0}.$$ (13)
We stress that this expression for the Planck mass depends on the integration constant $`\sigma_0`$. With the choice $`\sigma_0=0`$, as in Ref., the above expression reduces to
$$M_{\mathrm{Pl}}^2=\frac{M^3}{k}(1-e^{-2\pi kr_c}).$$ (14)
With this choice, one is forced to take $`M`$ to be of the order of $`M_{\mathrm{Pl}}\sim 10^{18}`$ GeV when considering $`k>0`$. As pointed out in Ref., we are free to choose the $`y`$-independent constant $`\sigma_0`$ in Eq. (9). The particular choice $`\sigma_0=-\pi kr_c/2`$ was made so as to meet the requirement that the expression for $`M_{\mathrm{Pl}}`$ be manifestly invariant with respect to the change $`k\to-k`$, which amounts to exchanging the roles of the two boundaries. Then, the four-dimensional Planck mass can be written
$$M_{\mathrm{Pl}}^2=\frac{2M^3}{k}\mathrm{sinh}(\pi kr_c).$$ (15)
As was noted in Ref., however, it is almost certainly true that a change of $`\sigma_0`$ has no net physical effect and must not change the values of four-dimensional physical quantities. Therefore the physical quantities should exhibit the above exchanging-symmetry without any particular choice of $`\sigma_0`$. In other words, physical quantities, including the Planck mass, should have the following two properties. First, they are independent of $`\sigma_0`$. Second, they are not affected by the above brane exchange. In the next section, we present a prescription that naturally realizes these two properties.
## 3 Four-dimensional effective Planck mass
As stated above, the choice of $`\sigma_0`$ has no net physical effect, and it must not change the values of physical quantities. That is, all the physical quantities, including the Planck mass, must be independent of $`\sigma_0`$. In this sense $`\sigma_0`$ may be regarded as a gauge degree of freedom. In particular, the $`\sigma_0`$-independence of the four-dimensional Planck mass may be understood by the following argument. Observe that $`\sigma_0`$ determines the ratio of the length scales of the fifth-dimensional direction and the four-dimensional directions at $`y=0`$. Therefore, after we have integrated over the full fifth dimension when calculating $`M_{\mathrm{Pl}}`$, the freedom in this ratio will be invisible in the effective theory. To find the four-dimensional Planck mass more carefully, it is important to make clear at which location $`y`$ the four-dimensional Planck mass is measured. To this end, we need to reconsider the choice of the brane coordinate and the induced metric. We first recall the general situation. When the $`i`$-th brane with the brane coordinate $`\xi_i^\mu`$ is embedded into the five-dimensional spacetime by $`x^\mu=x^\mu(\xi_i)`$ and $`y=y_i`$, the induced metric on it is given by
$$g_{\mu\nu}^{(i)}(\xi_i)=\frac{\partial x^M}{\partial\xi_i^\mu}\frac{\partial x^N}{\partial\xi_i^\nu}G_{MN}(x=x(\xi_i),y=y_i).$$ (16)
In the discussion given in §2, the implicit choice $`x^\mu=\xi_i^\mu`$ was made, so that $`g_{\mu\nu}^{(i)}=G_{\mu\nu}(y=y_i)`$. When the fine-tuning conditions (11) are satisfied, the branes are flat and the induced metrics generally take the form $`g_{\mu\nu}^{(i)}=e^{-2\alpha}\eta_{\mu\nu}`$, with an $`x^\mu`$-independent constant $`\alpha`$. In view of Eq. (16), the corresponding brane coordinates are uniquely determined by $`\xi_i^\mu=e^{\alpha-\sigma_i}x^\mu`$ (up to a Poincaré transformation).
Therefore, when discussing a four-dimensional effective theory on the brane, one should use the correct set of induced metric and brane coordinate,
$$\left(g_{\mu\nu}^{(i)}=e^{-2\alpha}\eta_{\mu\nu},\ \xi_i^\mu=e^{\alpha-\sigma_i}x^\mu\right).$$ (17)
This is the point of our treatment. With this correct set, we can determine the relation between the five-dimensional scale $`M`$ and the four-dimensional Planck scale $`M_{\mathrm{Pl}(i)}`$ on the $`i`$-th brane by integrating over the fifth dimension:
$$S_{eff}=2M^3\int d^4x\int_{-\pi r_c}^{\pi r_c}dy\,e^{-2\sigma(y)}\sqrt{-\overline{g}}R(\overline{g};x)+\mathrm{\cdots}=2M_{\mathrm{Pl}(i)}^2\int d^4\xi_i\sqrt{-\overline{g}^{(i)}}R(\overline{g}^{(i)};\xi_i)+\mathrm{\cdots},$$ (18)
where $`R(\overline{g}^{(i)};\xi_i)`$ is the four-dimensional scalar curvature constructed from the induced metric $`\overline{g}_{\mu\nu}^{(i)}`$ and the coordinate $`\xi_i^\mu`$. We note that, following Ref., we define $`M_{\mathrm{Pl}}`$ as the coefficient of the Einstein-Hilbert (EH) term. This is a proper definition, reflecting the graviton self-couplings contained in the EH term. Alternatively, we can determine $`M_{\mathrm{Pl}}`$ from the graviton coupling to the matter stress tensor. We have confirmed that both methods yield the same result, given below. We now determine the relation between $`M_{\mathrm{Pl}}`$ and the five-dimensional quantities $`M`$, $`k`$ and $`r_c`$ by using the set (17). For definiteness, let us choose $`\alpha=0`$, so that the induced metric is precisely Minkowskian (as is usual in field theory in flat space-time). This means that we choose $`(\eta_{\mu\nu},\ \xi_i^\mu=e^{-\sigma_i}x^\mu)`$ as the induced metric and brane coordinate. With this choice, we change the integration variables to $`\xi_i^\mu=e^{-\sigma_i}x^\mu`$ and contract the indices with $`g_{\mu\nu}^{(i)}=\eta_{\mu\nu}`$. We then obtain
$$S_{eff}=\int d^4\xi_i\int_{-\pi r_c}^{\pi r_c}dy\,2M^3e^{-2(\sigma(y)-\sigma_i)}\sqrt{-\overline{g}^{(i)}}R(\overline{g}^{(i)};\xi_i)+\mathrm{\cdots}\equiv 2M_{\mathrm{Pl}(i)}^2\int d^4\xi_i\sqrt{-\overline{g}^{(i)}}R(\overline{g}^{(i)};\xi_i)+\mathrm{\cdots},$$ (19)
where $`\sigma_i=\sigma(y_i)`$. It follows that
$$M_{\mathrm{Pl}(i)}^2=\int_{-\pi r_c}^{\pi r_c}dy\,M^3e^{-2(\sigma(y)-\sigma_i)}=M^3e^{2(\sigma_i-\sigma_0)}\int_{-\pi r_c}^{\pi r_c}dy\,e^{-2k|y|}=\frac{M^3}{k}(1-e^{-2\pi kr_c})e^{2(\sigma_i-\sigma_0)}.$$ (20)
With (20), it is clear that $`M_{\mathrm{Pl}(i)}`$ is independent of $`\sigma_0`$. Explicitly, we find
$$M_{\mathrm{Pl}(0)}^2=\frac{M^3}{k}(1-e^{-2\pi kr_c})$$ (21)
on the brane at $`y=0`$, and
$$M_{\mathrm{Pl}(1)}^2=\frac{M^3}{k}(e^{2\pi kr_c}-1)$$ (22)
on the brane at $`y=\pi r_c`$. Note that these expressions are transformed into each other by exchanging $`k`$ with $`-k`$. Thus our results naturally possess the two properties stated in the previous section. We note that our expression (21) for the brane at $`y=0`$ coincides with Eq. (14), which is derived by simply neglecting $`\sigma_0`$. Therefore, in this case we have explicitly confirmed the naive expectation that $`\sigma_0`$ may be absorbed by a rescaling of $`x^\mu`$, since our expression (21) takes account of $`\sigma_0`$ by using the correct set $`(\eta_{\mu\nu},\ \xi_{i=0}^\mu=e^{-\sigma_0}x^\mu)`$ of induced metric and brane coordinate. The same is not true for the expression (22), however.
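A quick numerical check of Eqs. (21) and (22) makes the non-universality explicit; the inputs $`M\sim k\sim 1`$ TeV and $`kr_c=11`$ are our illustrative choices:

```python
import math

# Eqs. (21)-(22): both Planck masses from M, k, r_c alone (no sigma_0).
def M_pl(M, k, krc, brane):
    base = M**3 / k * (1.0 - math.exp(-2.0 * math.pi * krc))   # Eq. (21)
    if brane == 1:
        base *= math.exp(2.0 * math.pi * krc)                  # Eq. (22)
    return math.sqrt(base)

M = k = 1e3      # GeV (illustrative)
krc = 11.0
print(f"hidden brane:  M_Pl(0) ~ {M_pl(M, k, krc, 0):.2e} GeV")   # ~1e3
print(f"visible brane: M_Pl(1) ~ {M_pl(M, k, krc, 1):.2e} GeV")   # ~1e18
```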
When we consider the scenario of Ref., in which Standard Model fields are assumed to be confined on the brane at $`y=\pi r_c`$ with negative tension, the naive expectation is no longer correct, and the original expression (14) should be modified to our (22). The origin of the discrepancy can be understood as follows. If one tries to absorb $`\sigma_0`$ by the rescaling $`\xi_{i=1}^\mu=e^{-\sigma_0}x^\mu`$, as in the $`y=0`$ case, the induced metric $`\eta_{\mu\nu}`$ cannot be used, since $`(\eta_{\mu\nu},\ \xi_{i=1}^\mu=e^{-\sigma_0}x^\mu)`$ is not the correct set of induced metric and brane coordinate. The correct set is $`(\eta_{\mu\nu},\ \xi_{i=1}^\mu=e^{-\sigma_1}x^\mu)`$. Using this correct set, one finds that the Planck scale at $`y=\pi r_c`$ is given by our formula (22). The most important aspect of the RS model is that it gives rise to a localized graviton field. Our results (21) and (22) can naturally be understood from this fact: the small Planck scale $`M_{\mathrm{Pl}(0)}`$ arises because the graviton is localized in the fifth dimension near the brane of positive tension, while the large Planck scale $`M_{\mathrm{Pl}(1)}`$ arises because of the small overlap of the graviton with the brane of negative tension.<sup>3</sup><sup>3</sup>3To be precise, $`M_{\mathrm{Pl}(1)}`$ should be identified as the Planck scale in models with the Standard Model confined on the brane at $`y=\pi r_c`$, and $`M_{\mathrm{Pl}(0)}`$ should be identified as that in models with the SM confined at $`y=0`$. In any case, we have only one Planck scale in a given model. A striking feature of our results (21) and (22) is that the relative size of the four- and five-dimensional Planck scales crucially depends on the location at which $`M_{\mathrm{Pl}}`$ is measured. In the model in which Standard Model fields are confined on the positive-tension brane at $`y=0`$, as in Ref., we have the relation (21), from which $`M`$ is of the same order as $`M_{\mathrm{Pl}(0)}\sim 10^{19}`$ GeV, that is, $`M\sim M_{\mathrm{Pl}(0)}`$. This conclusion is the same as that in the original proposal, of course. In the model in which Standard Model fields are confined on the negative-tension brane at $`y=\pi r_c`$, as in Ref., however, we now have our modified relation (22), which gives
$$M_{\mathrm{Pl}(1)}^2\simeq\frac{M^3}{k}e^{2\pi kr_c}.$$ (23)
We see that the fundamental mass scale $`M`$ becomes much smaller than the Planck scale, unlike in the original proposal. As a result, it is perfectly possible that the fundamental scale $`M`$ lies in the TeV region. We need to use the expressions (21) and (22) for the four-dimensional Planck mass properly, depending on the model employed.
## 4 Physical mass scale
In order to check from another viewpoint that the physical quantities are independent of $`\sigma_0`$, we consider a massless bulk scalar field as an illustrative example, and examine the masses of the KK modes. Extensions to the cases of a massive bulk scalar field and a bulk gauge field are straightforward. The action is given by
$$S_{\mathrm{scalar}}=-\frac{1}{2}\int d^5x\sqrt{-G}G^{MN}\partial_M\mathrm{\Phi}\partial_N\mathrm{\Phi}=\frac{1}{2}\int d^5x\,e^{-2\sigma}\mathrm{\Phi}\left(\mathrm{\Box}+e^{2\sigma}\partial_ye^{-4\sigma}\partial_y\right)\mathrm{\Phi},$$ (24)
where $`\mathrm{\Box}\equiv\eta^{\mu\nu}\partial_\mu\partial_\nu`$.
The KK mode expansion is
$$\mathrm{\Phi}(x,y)=\underset{n\ge 0}{\sum}\phi_n(x)\chi_n(y),$$ (25)
where the mode functions $`\chi_n(y)`$ are chosen to satisfy
$$-\frac{d}{dy}\left(e^{-4\sigma}\frac{d}{dy}\right)\chi_n(y)=M_n^2e^{-2\sigma}\chi_n(y)$$ (26)
with mass eigenvalues $`M_n`$. The solutions to this equation are related to the Bessel functions $`J_2`$ and $`Y_2`$ of order two. We should be careful with Eq. (26), however, since it still contains $`\sigma_0`$. As the orthonormality condition for the $`\chi_n`$, let us take
$$\int_{-\pi r_c}^{\pi r_c}dy\,e^{-2\sigma}\chi_m\chi_n=\delta_{mn}e^{-2\sigma_i}.$$ (27)
Then we have
$$S_{\mathrm{scalar}}=\frac{1}{2}\underset{n\ge 0}{\sum}\int d^4x\,e^{-2\sigma_i}\phi_n(\mathrm{\Box}-M_n^2)\phi_n.$$ (28)
From our point of view, the action should be described using the $`i`$-th brane coordinate $`x_i^\mu=e^{-\sigma_i}x^\mu`$. Then, the four-dimensional volume element $`d^4x`$ and the differential operator $`\mathrm{\Box}`$ are replaced by $`d^4x_i=d^4x\,e^{-4\sigma_i}`$ and $`\mathrm{\Box}_i=\mathrm{\Box}e^{2\sigma_i}`$, respectively:
$$S_{\mathrm{scalar}}=\frac{1}{2}\underset{n\ge 0}{\sum}\int d^4x_i\,\phi_n^{\prime}\left(\mathrm{\Box}_i-e^{2\sigma_i}M_n^2\right)\phi_n^{\prime}.$$ (29)
We find that the canonically-normalized fields in four dimensions are $`\phi_n^{\prime}(x_i)=\phi_n(x)`$, and the physical masses of the KK modes are given by
$$M_{n(i)}^{\prime 2}\equiv e^{2\sigma_i}M_n^2.$$ (30)
To clarify the physical meaning of (30), let us rewrite (26) as
$$-\frac{d}{dy}\left(e^{-4ky}\frac{d}{dy}\right)\chi_n=\left(M_n^2e^{2\sigma_0}\right)e^{-2ky}\chi_n\equiv m_n^2e^{-2ky}\chi_n.$$ (31)
With $`m_n^2`$ regarded as an eigenvalue, this equation is independent of $`\sigma_0`$. This implies that $`m_n^2`$ depends on the parameters $`k`$ and $`r_c`$, but not on $`\sigma_0`$: $`m_n^2=m_n^2(k,r_c)`$. Therefore the $`\sigma_0`$ dependence cancels out in Eq. (30),
$$M_{n(i)}^{\prime}=M_ne^{\sigma_i}=m_ne^{\sigma_i-\sigma_0}.$$ (32)
The mass spectrum of the KK modes is determined by the boundary condition at $`y=\pi r_c`$, $`(d/dy)\chi_n(y=\pi r_c)=0`$. In particular, we are interested in the mass scale $`M_{n(1)}^{\prime}`$, as measured on the visible brane. To this end, note that Eq. (31) is precisely the same equation as that treated in Ref., where it was shown that
$$m_n\sim ke^{-\pi kr_c}=ke^{-(\sigma_1-\sigma_0)}.$$ (33)
Therefore the mass scale of the KK modes is estimated as
$$M_{n(1)}^{\prime}\sim k\sim M\ll M_{\mathrm{Pl}(1)}.$$ (34)
We thus confirm that the mass scale of the KK modes is significantly smaller than the Planck scale $`\sim 10^{18}`$ GeV. We remark, however, that our interpretation is quite different from the usual one,
$$m_n\sim k\sim M\sim M_{\mathrm{Pl}}.$$ (35)
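For a rough numerical picture of the spectrum, we note that in the standard treatment of Eq. (31) the dimensionless eigenvalues are zeros of $`J_1`$ (our assumption, consistent with the $`J_2`$ solutions and the Neumann boundary condition quoted above):

```python
import math
from scipy.special import jn_zeros

# KK masses on the visible brane, Eqs. (32)-(34): m_n ~ x_n k e^{-pi k r_c}
# with x_n the zeros of J_1 (assumed), so M'_n(1) = m_n e^{pi k r_c} ~ x_n k.
k, krc = 1e3, 11.0          # k ~ M ~ 1 TeV, illustrative
for n, x in enumerate(jn_zeros(1, 3), start=1):
    m_n   = x * k * math.exp(-math.pi * krc)      # eigenvalue of Eq. (31)
    M_vis = m_n * math.exp(math.pi * krc)         # Eq. (32) at y = pi r_c
    print(f"n = {n}: visible-brane mass ~ {M_vis / 1e3:.2f} TeV")
```

The visible-brane KK masses come out at a few times $`k\sim M`$, i.e. in the TeV range, as stated in Eq. (34).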
Moreover, the symmetry under exchanging the two branes is manifest in our expressions for the Planck masses. On the brane with negative tension, the Planck mass is always much larger than on the brane with positive tension. This fact is interpreted as reflecting the smallness of the gravitational coupling constant there, that is, the small overlap of the graviton with the brane of negative tension. When we identify the brane of negative tension with the visible brane, the fundamental scale $`M`$ can be significantly smaller than the Planck mass, and the masses of the KK modes are of the same order as $`M`$ for the massless bulk scalar. We can summarize the above statements as follows: when we estimate the values of physical quantities, we must multiply the bulk values by the warp factor raised to the power corresponding to their mass dimension. The Planck mass is no exception to this rule. The most striking result is that the fundamental scale $`M`$ in the five-dimensional theory can be significantly smaller than has been supposed so far. For instance, it can lie in the TeV region. Given this, it will be interesting to find direct evidence for the extra dimension in the Randall-Sundrum-type scenario. High-energy accelerator experiments in the near future might directly probe the existence of the extra dimension. ## Acknowledgements We are grateful to H. Nakano, A. Kageyama and T. Hirayama for useful discussions and suggestions. We also thank K.-I. Izawa for helpful comments.
# Total energy density as an interpretative tool ## I Acknowledgment This work was initiated at the Aspen Center for Physics and carried forward at the Department of Physics of Rutgers University (New Brunswick). D.F. and K.B. were supported by a grant from Research Corporation and by the National Science Foundation under grant number . We thank Eberhard Engel for the use of his atomic OEP code.
# The bulge globular cluster NGC 6553: Observations with HST’s WFPC2, STIS and NICMOS ## 1. Introduction The Galactic bulge globular cluster NGC 6553 was observed as part of a large HST project to study the formation and evolution of rich star clusters in the Large Magellanic Cloud. A total of 95 orbits were awarded to this project in Cycle 7, using WFPC2, STIS (in imaging mode), and NICMOS to obtain multi-wavelength photometry in eight clusters with ages $`10^7,10^8,10^9`$ and $`10^{10}`$ years. Data acquisition was completed in November 1998. Early results have been presented by Beaulieu et al. (1998), Elson et al. (1998c), and Johnson et al. (1998, 1999). The results of a pilot study of one of the clusters in the sample are described in Elson et al. (1998a,b). NGC 6553 was observed for calibration purposes, in order to transform NICMOS and STIS magnitudes directly into absolute magnitudes on a standard system. Beyond calibration, our data allow us to investigate various properties of the cluster. Clusters like NGC 6553 are of particular interest in that they serve as tracers of the formation and evolution of the bulge component of the Milky Way. In this regard, any spreads in age or metallicity among bulge clusters provide clues concerning the enrichment history of the bulge, the timescale for its formation, and the time of its formation relative to the halo and disk. Their dynamically vulnerable location also allows the effect of tidal shocking on the structure and stellar content of the clusters to be explored. Our WFPC2 data allow us to probe the stellar content of NGC 6553 well below its main-sequence turnoff, and to determine accurate values of the cluster’s distance and reddening. Our STIS data allow us to determine a deep luminosity function (LF) which may be compared with LFs in other Galactic globular clusters. We are investigating differential reddening as the explanation for all the apparent peculiarities of this cluster. The peculiarities include the red giant branch (RGB) bump, the tilted horizontal branch (HB), the apparent “second turnoff”, and the unusual LF, consistent with these studies. ## 2. WFPC2 CMDs The main features of the colour-magnitude diagrams (CMDs) include the blue population brightwards of the main-sequence turnoff, the tilted HB, and the clump below the HB. A 12 Gyr isochrone for \[Fe/H\]$`=-0.4`$, reddening $`\mathrm{E}(\mathrm{B}-\mathrm{V})=0.7`$, and distance modulus $`(\mathrm{m}-\mathrm{M})_0=13.7`$ fits the data best (Fig 1). Also visible in the WFPC2 CMDs is an apparent “second turnoff”, an excess of stars near $`(\mathrm{V},\mathrm{V}-\mathrm{I})=(21,2)`$. The second, fainter turnoff is particularly prominent in chip WF4 and fainter in the PC and WF2 (WF3 contains the cluster core). It is not yet understood what this turnoff represents; it could be associated with the Galactic bulge, or could be the result of patchy reddening (see next subsection). ### 2.1. A tilted horizontal branch It has been noted by previous authors that the HB in NGC 6553 is far from horizontal. Its range in V spans $`\sim 0.5`$ mag. Some authors have attributed this tilt to differential reddening across the face of the cluster (cf. Guarnieri et al. 1998). Others suggest that metal line blanketing can produce a tilting of the HB (cf. Ortolani et al. 1990). ### 2.2. RGB bump The HB shows up prominently, peaking at $`\mathrm{V}_{555}=16.6`$ (Fig 1). A second peak is clearly visible $`\sim 0.9`$ magnitudes below the HB. This has been discussed, for example, by Sagar et al.
(1999) and attributed to a phase of stellar evolution where the star becomes fainter. Other authors have suggested that it could also be due to the superposition on the cluster RGB of background HB stars; however, this is not consistent with the redward shift of the background RGB. ## 3. STIS-LP LF Fig 2 shows the STIS-LP LF for NGC 6553, both raw and corrected for incompleteness. In the bottom panel, the instrumental magnitude has been transformed to an absolute magnitude in the STIS-LP passband, adopting an aperture correction of $`0.5`$ mag, a zero point of 23.4 mag, a reddening of $`\mathrm{E}(\mathrm{B}-\mathrm{V})=0.7`$ and an absolute distance modulus of $`(\mathrm{m}-\mathrm{M})_0=13.7`$. The reddening and distance information are from Guarnieri et al. (1998). The value of $`\mathrm{E}(\mathrm{B}-\mathrm{V})`$ has been transformed to an absorption in the STIS-LP passband by $`\mathrm{A}_{\mathrm{R}(\mathrm{STIS})}=2.505\,\mathrm{E}(\mathrm{B}-\mathrm{V})`$. This LF differs from that of most globular clusters (cf. Elson et al. 1998c), possibly because of significant uncorrected extinction. ## 4. NICMOS2 CMD Fig 3 shows the CMD of $`\mathrm{V}_{555}`$ vs ($`\mathrm{V}_{555}-\mathrm{H}_{160}`$) for the short exposure. A 12 Gyr isochrone for \[Fe/H\]$`=-0.4`$, reddening $`\mathrm{E}(\mathrm{B}-\mathrm{V})=0.7`$ and distance modulus $`(\mathrm{m}-\mathrm{M})_0=13.7`$ is overlaid but clearly does not represent the data. The considerable width of the main sequence, though less than in Fig 1, suggests important patchy extinction which may also be responsible for the difficulty in fitting isochrones. ## References Beaulieu, S.F., Elson, R., Gilmore, G., Johnson, R.A., Tanvir, N., Santiago, B. 1998, in IAU Symp. 190, New Views of the Magellanic Clouds, ed. xx (Dordrecht: Reidel), in press Bertelli, G., Bressan, A., Chiosi, C., Fagotto, F., & Nasi, E. 1994, A&AS, 106, 275 Elson, R. A. W. E., Sigurdsson, S., Davies, M., Hurley, J., & Gilmore, G. 1998a, MNRAS, 300, 857 Elson, R. A. W. E., Sigurdsson, S., Hurley, J., Davies, M.B., & Gilmore, G. 1998b, ApJL, 499, 53 Elson R., Tanvir N., Gilmore G., Johnson R. A. & Beaulieu S. F. 1998c, in IAU Symp. 190, New Views of the Magellanic Clouds, ed. xx (Dordrecht: Reidel), in press Ferraro F. et al. 1997, A&A, 324, 915 Guarnieri M. D., Ortolani S., Montegriffo P., Renzini A., Barbuy B., Bica E. & Moneti A. 1998, A&A, 331, 70 Johnson R. A., Elson R. A. W., Gilmore G., Beaulieu S. & Tanvir N.R. 1998, in ESO Conf. & Workshop Proc., 55, NICMOS and the VLT, ed. W. Freudling & R. Hook, 140 Johnson R. A., Beaulieu S. F., Elson R. A. W., Gilmore G., Tanvir N. & Santiago B. 1999, in the second “Three-Islands” conference, ed. , Stellar Clusters and Associations: Convection, Rotation and Dynamos, in press Ortolani S., Barbuy B. & Bica E. 1990, A&A, 236, 362 Sagar R., Subramaniam A., Richtler T. & Grebel E. K. 1999, A&AS, 135, 391 Worthey, G. A. 1998, priv. comm.
# Solutions of the Wick-Cutkosky model in the Light Front Dynamics ## 1 Introduction Light Front Dynamics (LFD) is a field-theoretically inspired hamiltonian approach especially well adapted to describing relativistic composite systems. First suggested by Dirac , it has since been widely developed (see and references therein) and recently applied with success, in its explicitly covariant version, to the high momentum processes measured at TJNAF . In this approach the state vector is defined on a space-time hyperplane given by $`\omega \cdot x=\sigma `$, where $`\omega =(1,\widehat{n})`$ is a light-like four-vector. We present here the first results obtained within this approach for the Wick-Cutkosky model . This model describes the dynamics of two identical scalar particles of mass $`m`$ interacting by the exchange of a massless scalar particle. This first step towards more realistic systems constitutes an instructive case and is presently considered by several authors . The model has been extended to the case where the exchanged particle has a non-zero mass $`\mu `$ and used to build a relativistic scalar model for the deuteron. The results presented in this paper concern the S-wave bound states in the ladder approximation. They aim to (i) compare the LFD and Bethe-Salpeter descriptions and study their non-relativistic limits, (ii) disentangle the origin of the different relativistic effects, (iii) evaluate the contribution of higher Fock components, and (iv) apply this study to a scalar model of the deuteron. ## 2 Equation for the Wick-Cutkosky model We have considered the following Lagrangian density: $`\mathcal{L}=\frac{1}{2}\left(\partial _\nu \varphi \,\partial ^\nu \varphi -m^2\varphi ^2\right)+\frac{1}{2}\left(\partial _\nu \chi \,\partial ^\nu \chi -\mu ^2\chi ^2\right)-g\varphi ^2\chi `$ where $`\varphi `$ and $`\chi `$ are real fields. In the case $`\mu =0`$ it corresponds to the Wick-Cutkosky model. The wave function $`\mathrm{\Psi }`$, describing a bound state of two particles with momenta $`k_1`$ and $`k_2`$, satisfies on the light front the dynamical equation $`[4(q^2+m^2)-M^2]\mathrm{\Psi }(\stackrel{}{q},\widehat{n})=-{\displaystyle \frac{m^2}{2\pi ^3}}{\displaystyle \int }{\displaystyle \frac{d^3q^{}}{\epsilon _{q^{}}}}V(\stackrel{}{q},\stackrel{}{q}^{},\widehat{n},M^2)\mathrm{\Psi }(\stackrel{}{q}^{},\widehat{n})`$ (1) The variable $`\stackrel{}{q}`$ is the momentum of one of the particles in the reference system where $`\stackrel{}{k_1}+\stackrel{}{k_2}=0`$, and tends in the non-relativistic limit to the usual center of mass momentum. M represents the total mass of the composite system, $`B=2m-M`$ denotes its binding energy, and $`\epsilon _q=\sqrt{q^2+m^2}`$. In the case of S-waves the wavefunction is a scalar quantity depending only on the scalars $`q`$ and $`\widehat{n}\cdot \stackrel{}{q}`$ . The interaction kernel $`V`$ calculated in the ladder approximation is given by $`V(\stackrel{}{q},\stackrel{}{q}^{},\widehat{n},M^2)=-{\displaystyle \frac{4\pi \alpha }{Q^2+\mu ^2}}`$ (2) with $`Q^2`$ $`=`$ $`(\stackrel{}{q}-\stackrel{}{q}^{})^2-(\widehat{n}\cdot \stackrel{}{q})(\widehat{n}\cdot \stackrel{}{q}^{}){\displaystyle \frac{(\epsilon _{q^{}}-\epsilon _q)^2}{\epsilon _{q^{}}\epsilon _q}}+\left(\epsilon _q^2+\epsilon _{q^{}}^2-{\displaystyle \frac{M^2}{2}}\right)\left|{\displaystyle \frac{\widehat{n}\cdot \stackrel{}{q}^{}}{\epsilon _{q^{}}}}-{\displaystyle \frac{\widehat{n}\cdot \stackrel{}{q}}{\epsilon _q}}\right|`$ The coupling parameter $`\alpha `$ is defined by $`\alpha =g^2/16\pi m^2`$.
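For readers who wish to experiment numerically with the kernel (2), a minimal sketch follows. It is our own illustration rather than the authors' code; it assumes the form of $`Q^2`$ written above, units $`\hbar =c=1`$, and takes $`\stackrel{}{q}`$, $`\stackrel{}{q}^{}`$ and $`\widehat{n}`$ as NumPy 3-vectors.

```python
import numpy as np

def Q2(q, qp, n, M2, m=1.0):
    """Light-front momentum transfer squared of Eq. (2), as written above."""
    eq, eqp = np.sqrt(q @ q + m**2), np.sqrt(qp @ qp + m**2)
    nq, nqp = n @ q, n @ qp
    dq = q - qp
    return (dq @ dq
            - nq * nqp * (eqp - eq)**2 / (eqp * eq)
            + (eq**2 + eqp**2 - 0.5 * M2) * abs(nqp / eqp - nq / eq))

def V(q, qp, n, M2, alpha, mu=0.0, m=1.0):
    """Ladder kernel of Eq. (2): V = -4*pi*alpha / (Q^2 + mu^2)."""
    return -4.0 * np.pi * alpha / (Q2(q, qp, n, M2, m) + mu**2)
```

Note that $`V<0`$, as expected for an attractive scalar exchange; the overall minus sign in (1) then makes the right-hand side positive for a bound state.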
In the limit $`\epsilon _b\ll 2m`$ some analytical solutions are known and, once the $`\widehat{n}`$ dependence in the kernel is removed – formally setting $`\widehat{n}=0`$ – the LFD equation reduces to the Schrödinger equation in momentum space for the Yukawa or Coulomb potential, $`\alpha `$ being the usual fine structure constant. Equation (1) has been solved with the coordinate choice displayed in figure 1. We have chosen the $`z`$ axis along $`\widehat{n}`$ and, with no loss of generality, $`\phi =0`$. The $`\phi ^{}`$ dependence of the kernel (2) can be integrated analytically and (1) turns into the two-dimensional integral equation $`[4(q^2+m^2)-M^2]\mathrm{\Psi }(q,\theta )={\displaystyle \frac{4m^2\alpha }{\pi }}{\displaystyle \int }{\displaystyle \frac{q^{\mathrm{\prime }2}\,dq^{}\,\mathrm{sin}\theta ^{}\,d\theta ^{}}{\epsilon _{q^{}}\sqrt{a^2-b^2}}}\mathrm{\Psi }(q^{},\theta ^{})`$ (3) with $`a`$ $`=`$ $`q^2+q^{\mathrm{\prime }2}-qq^{}\left(2\mathrm{cos}\theta \mathrm{cos}\theta ^{}+{\displaystyle \frac{(\epsilon _{q^{}}-\epsilon _q)^2}{\epsilon _q\epsilon _{q^{}}}}\right)`$ $`+`$ $`\left(q^2+q^{\mathrm{\prime }2}+2m^2-{\displaystyle \frac{M^2}{2}}\right)\left|{\displaystyle \frac{q^{}\mathrm{cos}\theta ^{}}{\epsilon _{q^{}}}}-{\displaystyle \frac{q\mathrm{cos}\theta }{\epsilon _q}}\right|+\mu ^2`$ $`b`$ $`=`$ $`2qq^{}\text{sin}\theta \,\text{sin}\theta ^{}`$ The kernel of (3) has an integrable singularity at $`(q,\theta )=(q^{},\theta ^{})`$. The equation is solved by expanding the solution on a basis of spline functions $`S_i`$, associated with the coordinates $`q`$ and $`\theta `$: $`\mathrm{\Psi }(q,\theta )=\sum _{i,j}c_{ij}S_i(q)S_j(\theta )`$. The two-dimensional integral on the r.h.s. is evaluated using a Gauss quadrature method adapted to treat the singularity. The unknowns of the problem are the coefficients $`c_{ij}`$, which are solutions of a generalized eigenvalue problem $`\lambda BC=A(M^2)C`$; bound states correspond to $`M^2`$ values such that $`\lambda (M^2)=1`$. ## 3 Results The LFD binding energy for $`\mu =0`$ versus the coupling constant is displayed in figure 2 (solid line). It is compared with the non-relativistic values (dot-dashed line) and with a first order perturbative calculation $`B_{pert}`$ (dashed line), valid also for the Bethe-Salpeter (BS) equation , given by $`B_{pert}=\frac{m\alpha ^2}{4}\left(1+\frac{4}{\pi }\alpha \mathrm{log}\alpha \right)`$ (4) Corresponding numerical values – in $`\hbar =c=m=1`$ units – are given in table 1 together with the quantity $`R=\frac{\langle q^2\rangle }{m^2}`$ usually used to evaluate the relativistic character of a system. A first glance at this figure shows a significant departure from the non-relativistic results already for $`\alpha =0.1`$. This discrepancy – which keeps increasing until $`B`$ reaches the maximum value of $`2m`$ – reaches 100% for $`\alpha =0.3`$, whereas $`R`$ remains very small. When evaluated using non-relativistic solutions, $`R`$ is equal to $`B`$ (virial theorem), which gives $`R\approx 2\%`$ for $`\alpha =0.3`$, in contrast with the 100% effect in the binding. The $`R`$ values obtained using the LFD solutions are even smaller (see table 1). It is worth noticing the sizeable relativistic effects observed in a system for which both the binding energy and the average momentum are small. A good agreement with the perturbative calculation is found up to values $`\alpha \approx 0.3`$, where the relative differences are $`\approx 3\%`$. The particular form of equation (4) ensures the existence of a non-relativistic limit, the same for the LFD and BS approaches, for weakly bound states. We will see later on that this situation is particular to the case $`\mu =0`$.
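To make the solution procedure concrete, here is a crude Nystrom-style discretization of (3); it is our simplification, standing in for the authors' spline basis and singularity-adapted quadrature, and it requires $`\mu >0`$ since the integrable singularity is left untreated.

```python
import numpy as np

def lam(M2, alpha, mu=0.15, m=1.0, nq=32, nt=16, qmax=25.0):
    """Largest eigenvalue lambda(M^2) of Eq. (3) discretized on a
    Gauss-Legendre (q, theta) grid.  A bound state of squared mass M2
    exists when lam(M2, alpha) = 1."""
    xq, wq = np.polynomial.legendre.leggauss(nq)
    xt, wt = np.polynomial.legendre.leggauss(nt)
    qs, wqs = 0.5 * qmax * (xq + 1.0), 0.5 * qmax * wq
    ts, wts = 0.5 * np.pi * (xt + 1.0), 0.5 * np.pi * wt
    Q, T = np.meshgrid(qs, ts, indexing="ij")
    q, t = Q.ravel(), T.ravel()
    w = np.outer(wqs, wts).ravel()
    e = np.sqrt(q**2 + m**2)
    qi, qj = q[:, None], q[None, :]
    ti, tj = t[:, None], t[None, :]
    ei, ej = e[:, None], e[None, :]
    a = (qi**2 + qj**2
         - qi * qj * (2 * np.cos(ti) * np.cos(tj) + (ej - ei)**2 / (ei * ej))
         + (qi**2 + qj**2 + 2 * m**2 - 0.5 * M2)
         * np.abs(qj * np.cos(tj) / ej - qi * np.cos(ti) / ei)
         + mu**2)
    b = 2 * qi * qj * np.sin(ti) * np.sin(tj)
    kernel = (4 * m**2 * alpha / np.pi) / np.sqrt(a**2 - b**2)
    A = kernel * (w * q**2 * np.sin(t) / e)[None, :] \
        / (4 * (q**2 + m**2) - M2)[:, None]
    return np.linalg.eigvals(A).real.max()
```

A simple bisection in $`M^2\in (0,4m^2)`$ on the condition $`\lambda (M^2)=1`$ then yields the binding energy $`B=2m-M`$ for a given coupling.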
The bound-state wavefunctions presented below are normalized according to $`\frac{m}{(2\pi )^3}{\displaystyle \int }\mathrm{\Psi }(q,\theta )^2\frac{d^3q}{\epsilon _q}=1`$ (5) with $`\epsilon _q=m`$ in the non-relativistic case. The LFD wave function $`\mathrm{\Psi }(q,\theta =0)`$ obtained for $`\alpha =0.5`$ is compared in figure 3a (solid line) with the corresponding non-relativistic solution (dot-dashed line), that is, the Coulomb wave function. The sizeable difference between both functions is mainly due to the differences in their binding energies: $`B_{LFD}=0.0267`$ whereas $`B_{NR}=0.0625`$ for the same coupling constant. In order to compare wave functions with the same energy, the value of the coupling constant for the non-relativistic solution is adjusted to $`\alpha _{NR}=0.327`$. The wave function obtained (long-dashed curve) is then much closer to the relativistic one. Furthermore, in the region of high momentum transfer, the relativistic function is smaller than the Coulomb one, as expected from the natural cut-off of high momentum components introduced by relativity. However, these differences must be considered together with the $`\theta `$ dependence of the LFD solutions, which exists even for S-waves. This angular dependence, normalized by the value of $`\mathrm{\Psi }(q,\theta =0)`$, is shown in figure 3b for different values of the momentum, $`0\le q\le 1.5`$. As one can see, the influence of the momentum orientation relative to the light-front plane is far from negligible. This effect increases with $`q`$ and, for a fixed value of the momentum, is maximal when $`\widehat{q}\cdot \widehat{n}=0`$. For this kinematical configuration, i.e. relative momentum in the light-front plane, the relativistic wave function $`\mathrm{\Psi }(q,\theta =90^o)`$ at high momentum is even found to be bigger than the non-relativistic one. To get rid of this dependence, we compared $`|\mathrm{\Psi }(q,\theta )|^2`$ integrated over the $`\theta `$ degree of freedom, both for relativistic and non-relativistic solutions. The resulting functions, displayed in figure 4, measure the effective relativistic effects in the wavefunctions. At $`q=0`$ they remain at the level of $`\sim 5\%`$, once the energy is readjusted. In the high momentum region the relativistic solution is – as expected – smaller than the non-relativistic one, but their differences reach a factor of three at $`q=2`$, even for a moderate value of the coupling constant, $`\alpha =0.5`$. In the case $`\mu \ne 0`$ there exists a critical coupling $`\alpha _0`$ below which there is no bound solution. Figure 6 represents the binding energy $`B`$ as a function of the coupling constant $`\alpha `$ for different values of $`\mu `$. These results are compared with those provided by the BS equation in the same ladder approximation, whose kernel incorporates higher order intermediate states. We have solved this equation using the method described in . A first remark in comparing both approaches is that their results are close to each other. This fact is far from obvious – especially for large values of the coupling constant – given the differences in their ladder kernels. A quantitative estimation of their spread can be obtained by looking at a horizontal cut of figure 6, i.e. calculating the relative difference in the coupling constant $`(\alpha _{LFD}-\alpha _{BS})/\alpha _{LFD}`$ for a fixed value of the binding energy.
The results, displayed in figure 6 for $`B=1.0,0.1,0.01`$, show that the relative differences (i) are decreasing functions of $`\mu `$ for all values of $`B`$, and (ii) increase with $`B`$ but are limited to 10% even for the strong binding case $`B=m`$, which involves values of $`\alpha \approx 5`$. This indicates the relatively weak importance of including higher Fock components in the ladder kernel even for strong couplings, as already discussed in . It is interesting to study the weak binding limit of both relativistic approaches and compare them with the non-relativistic calculations in the case $`\mu \ne 0`$. The results are given in figure 7a for $`\mu =1`$. They show on one hand that LFD and BS (solid lines) converge towards very close, though slightly different, values of the coupling constant ($`\frac{\mathrm{\Delta }\alpha }{\alpha }\approx 0.01`$). On the other hand one can see, contrary to the $`\mu =0`$ case in figure 2, a dramatic departure of both relativistic approaches from the non-relativistic theory (dot-dashed line), even for negligible values of the binding energy. The differences increase with $`\mu `$, as shown in figure 7b, in which the LFD and BS results are not distinguished. The origin of this departure lies in the fact that the integral term in equation (1) is dominated by the region $`q^{}\sim \mu `$, even for very small values of B, and for the case $`\mu \sim m`$ the $`\frac{q^{}}{m}`$ terms – which make the difference between the non-relativistic and relativistic kernels – are no longer negligible. We conclude that a non-relativistic treatment is inadequate for describing systems interacting via massive fields, which is the case for all of strong interaction physics when it is not described via gluon exchange. Some approximations of equation (1) have been studied in order to disentangle the different contributions to the relativistic energies $`B(\alpha )`$ (see figure 8). Equation (1) is written formally as $`K\mathrm{\Psi }=\frac{1}{\epsilon _q}V\mathrm{\Psi }`$. We first consider the case of a non-relativistic kernel $`V`$ – i.e. a Yukawa potential – and $`\epsilon _q=m`$: curve $`a`$ uses the non-relativistic kinematics $`K=4q^2+2mB`$ and curve $`a^{}`$ the relativistic one, $`K=4(q^2+m^2)-M^2`$. Curves $`b`$ and $`b^{}`$ are obtained in the same manner, but putting $`\epsilon _q=\sqrt{q^2+m^2}`$. The last one corresponds to the LFD equation. The results in figure 8 show that the kinematical term $`K`$ has a very small influence on $`B(\alpha )`$, whereas the contributions of $`\epsilon _q`$ and $`V`$ to the total binding are both essential. We conclude from this study that kinematical corrections alone, as they are performed e.g. in minimal relativity calculations, are not representative of relativistic effects. Even when including them in the kernel through $`\epsilon _q`$, the results obtained are wrong by a factor of 2. In the case of energy-dependent kernels the normalization condition (5) is only approximate. This energy dependence denotes a coupling to higher Fock components, and the correct normalization condition for the model considered reads $`N^{(2)}+N^{(3)}=1`$, where $`N^{(3)}`$ is the norm contribution from the three-body Fock component. Using (5), only the two-body part is included.
One can show that the correction $`N^{(3)}`$ to the two-body normalization condition is given by: $`N^{(3)}=-{\displaystyle \frac{4m^2}{(2\pi )^6}}{\displaystyle \int }{\displaystyle \frac{d^3q}{\epsilon _q}}{\displaystyle \frac{d^3q^{}}{\epsilon _{q^{}}}}\mathrm{\Psi }^{}(q^{},\stackrel{}{q}^{}\cdot \widehat{n}){\displaystyle \frac{\partial V}{\partial M^2}}(\stackrel{}{q},\stackrel{}{q}^{},\widehat{n},M^2)\mathrm{\Psi }(q,\stackrel{}{q}\cdot \widehat{n})`$ (6) This expression can be integrated analytically over the two angles $`\phi `$ and $`\phi ^{}`$, and we are left with a four-dimensional integration. The three-body correction to the norm, i.e. the ratio $`\frac{N^{(3)}}{N^{(2)}+N^{(3)}}`$, is shown as a function of the coupling constant in figure 9 for the case $`\mu =0.15`$. We remark that this correction is not zero at the critical value $`\alpha =0.35`$ corresponding to the $`B=0`$ threshold. Its behaviour in the region of large coupling tends asymptotically towards a value not exceeding $`30\%`$. This is in contrast with the evolution of the parameter $`R`$, introduced in the same figure to estimate the norm correction for a system with a given value of $`R`$. For the deuteron, e.g., one has $`R\approx 1\%`$ and the expected normalization corrections are of the order of $`4\%`$. ## 4 A scalar model for Deuteron A simple relativistic model for the deuteron in LFD is obtained by adding to the interaction kernel (2) an exactly analogous repulsive part, differing only in the sign of the coupling constant. Even if this procedure is no longer based on field theory – a scalar exchange cannot produce a repulsive interaction – the potential obtained constitutes an LFD relativistic version of the Malfliet and Tjon NN potential, of the form: $`V(\stackrel{}{q},\stackrel{}{q}^{},\widehat{n},M^2)={\displaystyle \frac{V_R}{Q^2+\mu _R^2}}-{\displaystyle \frac{V_A}{Q^2+\mu _A^2}}`$ (7) The non-relativistic model corresponds to $`Q^2=(\stackrel{}{q}-\stackrel{}{q}^{})^2`$, and for the $`{}_{}{}^{3}S_{1}^{}`$ state the parameter set $`V_R`$=7.29, $`V_A`$=3.18, $`\mu _A`$=0.314 GeV, $`\mu _R`$=$`2\mu _A`$, inserted in the Schrödinger equation, ensures a deuteron binding energy $`B=2.23`$ MeV. By solving the LFD equation (1) with the potential (7) we can estimate the modification of the deuteron description due to a fully relativistic treatment. The first result concerns its binding energy, which becomes $`B=0.96`$ MeV. The inclusion of relativity thus produces a dramatic repulsive effect, already pointed out in . We emphasize that, as mentioned before in the case of the Wick-Cutkosky model, the use of relativistic kinematics alone induces a very small change in the binding energy. The sizeable energy decrease is almost entirely due to the r.h.s. part of (1). To obtain a proper deuteron description in a relativistic framework it is necessary to readjust the parameters of the non-relativistic model. A binding energy of 2.23 MeV is recovered with a repulsive coupling constant $`V_R=6.60`$ – all other parameters being unchanged – which represents a decrease of 10% with respect to its original value. Another possibility is to increase the attractive coupling constant up to $`V_A=3.37`$. The relativistic effects in the deuteron wave function depend sensitively on the way the energy is readjusted, as well as on the relative angle $`\theta `$ between $`\widehat{n}`$ and the momentum $`\stackrel{}{q}`$. For instance, when modifying $`V_A`$ and for the value $`\theta =0`$, the zero of the relativistic wave function is shifted by $`\sim 0.1`$ GeV/c towards smaller values of $`q`$, and the differences in the momentum region of $`q=1.5`$ GeV/c are 50% in amplitude.
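A sketch of the kernel (7), reusing the `Q2` helper from the earlier sketch. The coupling and range parameters are those quoted above; the nucleon mass $`m=0.939`$ GeV is our assumption, as the text does not quote it.

```python
# Scalar Malfliet-Tjon-like kernel of Eq. (7).  V_R, V_A, mu_R, mu_A are
# the values quoted in the text; m = 0.939 GeV is our assumed nucleon mass.
V_R, V_A = 7.29, 3.18
MU_A = 0.314            # GeV
MU_R = 2.0 * MU_A

def V_MT(q, qp, n, M2, m=0.939):
    q2 = Q2(q, qp, n, M2, m)    # Q2() as defined in the earlier sketch
    return V_R / (q2 + MU_R**2) - V_A / (q2 + MU_A**2)
```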
## 5 Conclusion We have obtained the solutions of a scalar model in the Light Front Dynamics framework, in the ladder approximation. The results presented here concern the S-wave bound states. We have found that the inclusion of relativity has a dramatic repulsive effect on binding energies, even for systems with very small $`\frac{\langle q\rangle }{m}`$ values. The effect is especially relevant when using a scalar model for the deuteron: its binding energy is shifted from 2.23 MeV down to 0.96 MeV. This can be corrected by decreasing the repulsive coupling constant by 10%, which indicates the difficulty in determining the value of strong coupling constants beyond this accuracy within a non-relativistic framework. Light-front wave functions strongly differ from their non-relativistic counterparts if they are calculated using the same values of the coupling constant. Once the interaction parameters are readjusted to give the same binding energy, both solutions become closer, but their differences are still sizeable. The relativistic effects are shown to be induced mainly by the relativistic terms of the kernel. The relativistic kinematics has only a small influence on the binding energy; furthermore, its effect is attractive whereas the total relativistic one is strongly repulsive. The normalization corrections due to the three-body Fock components increase rapidly for small values of the coupling constant and saturate at $`\approx 25\%`$ in the ultra-relativistic region. They have been estimated at $`4\%`$ for the deuteron. The LFD results are very close to those provided by the Bethe-Salpeter equation over a wide range of coupling constants, despite the different physical input in their ladder kernels. However, in the case of systems interacting via a massive exchanged field, both strongly differ from the non-relativistic solutions even in the zero binding limit. This leads to the conclusion that such systems cannot be properly described using a non-relativistic dynamics. The case of higher angular momentum states for scalar particles requires more formal developments and is presented in a forthcoming publication . We are grateful to V.A. Karmanov for enlightening discussions during this work and for a careful reading of the manuscript. We thank L. Theussl for providing some results from prior to their publication.
# The Tree–Particle–Mesh N-body Gravity Solver ## 1 Introduction In addition to the rapid increase in available computing power, the rise of N-body simulations in astrophysics has been driven by the development of more efficient algorithms for evaluating the gravitational potential. Efficient algorithms, with better scaling than $`N^2`$, take two general forms. First, one can introduce a rectilinear spatial grid and, taking advantage of Fast Fourier Transforms, solve Poisson’s equation on this grid in Fourier space— the well-known Particle–Mesh (PM) method, which, while very fast, limits the spatial resolution to the grid spacing. To gain finer resolution one can introduce smaller subgrids (e.g. the ART code of Kravtsov et al. (1997); see also Norman & Bryan (1999)); alternatively one can compute the short-range interactions directly (the Particle–Particle–Particle–Mesh, or P<sup>3</sup>M, method (Efstathiou et al. (1985); Ferrell & Bertschinger (1994))). One widely used code (AP<sup>3</sup>M) combines both of these refinements (Couchman (1991); Pearce & Couchman (1997)). The second general approach is to approximate long-range interactions, which are less important to an accurate determination of the force, by grouping together distant particles. These are known as Tree methods, since a tree data structure is used to hold the moments of the mass distribution in nested subvolumes (Barnes & Hut (1986); Hernquist (1987)). ART and AP<sup>3</sup>M are discussed in more detail by Knebe et al. (2000); for a review of the field see Couchman (1997). All of these algorithms are more difficult to implement on parallel computers with distributed memory than on single processor machines. Gravity acts over long scales and gravitational collapse creates highly inhomogeneous spatial distributions, yet with parallel computers one needs to limit the amount of communication and give different processors roughly equal computing loads. The problem is one of domain decomposition— locating spatially compact regions and deciding which data are needed to find the potential within each region. Xu (1995) introduced a new N-body gravity solver which deals with this problem in a natural way. The Tree–Particle–Mesh (TPM) approach is similar to the P<sup>3</sup>M method in that the long-range force is handled by a PM code and the short-range force is handled by a different method— in this case a tree code, with the key difference that the tree code is used in adaptively determined regions of arbitrary geometry. In this paper we describe several improvements to the TPM code, and compare the results with those obtained by the P<sup>3</sup>M method. Our goal was to improve and to test the new algorithm while designing an implementation that could be parallelized efficiently and was optimal for use as a coarse grained method suitable for distributed computational architectures, including those having large latency. Section 2 describes the method, Section 3 the basis (density threshold) for domain decomposition, Section 4 the parallelism of the implemented algorithm (using message passing), and Section 5 tests and comparisons with the well calibrated P<sup>3</sup>M algorithm. The implementation presented in this paper is oriented towards a specific cosmological problem– the formation of large clusters– and we will be discussing it in that setting.
However, this algorithm could be used for many particle simulation applications, both in astrophysics and other fields; it should be beneficial in situations where the density distribution allows one to divide the particles into many isolated groups. Thus we will conclude this section with a brief summary of the specific cosmological context for those unfamiliar with it. A large cubical volume is simulated with periodic boundary conditions. The simulation begins in the linear regime; particles are displaced slightly from a uniform grid, giving Gaussian perturbations to a nearly constant density field. The particles are followed as they move under their mutual gravitational attraction. Over time, gravitational instability causes the initially small overdensities to collapse, forming highly dense halos (with central densities a factor of $`10^5`$ higher than the average). These halos are distributed along filaments surrounding large, low-density voids. The TPM algorithm was developed to deal with this highly inhomogeneous structure. ## 2 The TPM algorithm The basic idea behind the TPM algorithm is to identify dense regions and use a tree code to evolve them; low density regions and all long–range interactions are handled by a PM code. A general outline of the algorithm is: 1. Find the total density on a grid. 2. Based on the grid density, decompose the volume into a background PM volume and a large number of isolated high density regions. Every particle is then assigned to either the PM background or a specific tree. 3. Integrate the motion of the PM particles (those not in any tree) using the PM gravitational potential computed on the grid. 4. For each tree in turn, integrate the motion of the particles, using a smaller time step if needed; forces internal to the tree are found with a tree algorithm (Hernquist (1987)), added to the tidal forces from the external mass distribution taken from the PM grid. 5. Step the global time forward and go back to step 1. In this section we will consider certain aspects of this process in detail, and conclude with a more complete outline of the algorithm. ### 2.1 Spatial Decomposition We wish to locate regions of interest which will be treated with greater resolution in both space and time; for the purposes of cosmological structure formation this translates into regions of high density. It is also necessary that these regions remain physically distinct during the long PM time step (determined by the Courant condition) so that the mesh-based code accurately handles interactions between two such regions. The process we use can be thought of as finding regions enclosed by an isodensity contour. If one imagines the isodensity contours through a typical simulation at some density threshold $`\rho _{\mathrm{thr}}>\overline{\rho }`$, space is divided into a large number of typically isolated regions with $`\rho >\rho _{\mathrm{thr}}`$ plus a multiply connected low density background filling most of the volume. To locate isolated, dense regions we begin with the grid density, which has already been calculated by the PM part of the code. Each grid cell which is above a given threshold density $`\rho _{\mathrm{thr}}`$ is identified and given a unique positive integer key (the choice of $`\rho _{\mathrm{thr}}`$ is discussed in Section 3).
Cells are then grouped by a ‘friends-of-friends’ approach: for each cell with a nonzero key the 26 neighboring cells are examined and, if two adjacent cells are both above the threshold, they are grouped together by making their keys identical. The end result is isolated groups of cells, each separated from the other groups by at least one cell. If a wider separation between these regions is desired, one can examine a larger number of neighboring cells. The method is “unstructured” in the sense that the geometry of each region is not specified in advance, except insofar as it is singly connected. The shape of the region can be spheroidal, planar, or filamentary as needed. To assign particles to trees, the process used to find the density on the grid (described in the next section) is repeated. This involves locating the grid cell to which some portion of a particle’s mass is to be added, so it is easy to check at the same time whether that cell has a nonzero key and, if it does, to add that particle to the appropriate tree. Thus any particle that contributes mass to a cell with density above the threshold is put in a tree. Because of the spatial separation of the active regions (they are buffered by at least one non-tree cell) a particle will only belong to one tree even though it contributes mass to more than one cell. An example of this in practice is shown in the accompanying figure. In the bottom panel, all particles in a small piece of a larger simulation are shown in projection. The grid and the location of active cells are shown in the top panel; each isolated region is indicated by a unique numerical key. In a couple of cases it appears that different regions are in adjacent cells, but in fact they are separated in the third dimension– the region shown is 10 cells thick. In the lower of the middle two panels, the particles assigned to trees are shown with different symbols indicating membership in different trees. In the other panel the residual PM particle positions are plotted, demonstrating their much lower density contrast as compared to those in trees. ### 2.2 Force Decomposition As in Xu (1995), the force is decomposed into that which is internal to the tree and that due to all other mass: $`𝐅=𝐅_{\mathrm{internal}}+𝐅_{\mathrm{external}}.`$ (1) However, we do this in a different manner, described in this section, than was done in Xu (1995). The first step in obtaining the particle accelerations is to obtain the PM gravitational potential. The masses $`m_p`$ of the $`N`$ particles (including those in trees) are assigned to the grid cells using CIC weighting: $`\rho _{\mathrm{all}}(i,j,k)={\displaystyle \underset{p=1}{\overset{N}{\sum }}}m_pw_iw_jw_k,`$ (2a) $`w_i=\{\begin{array}{cc}1-|x_p-i|\hfill & \text{for }|x_p-i|<1,\hfill \\ 0\hfill & \text{otherwise,}\hfill \end{array}`$ (2b) where $`x_p`$ is a particle’s $`x`$ coordinate in units where the grid spacing is unity. The potential $`\mathrm{\Phi }_{\mathrm{PM},\mathrm{all}}`$, assuming periodic boundary conditions, is then found by solving Poisson’s equation using the standard FFT technique (Hockney & Eastwood 1981). Once a tree has been identified, we wish to know the forces from all the mass not included in that tree; thus the contribution of the tree itself must be removed from the global potential. This step will have to be done for each tree in turn.
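The two mesh-side ingredients just described, the grouping of above-threshold cells and the CIC deposition of Eq. (2), map onto a few lines of array code. The sketch below is our illustration, not the authors' implementation; note that `scipy.ndimage.label` does not wrap across periodic boundaries, which a production code must handle. The same deposit routine serves both the global density (2) and, below, the per-tree density (3).

```python
import numpy as np
from scipy import ndimage

def label_tree_regions(rho, rho_thr):
    """Give each isolated group of cells with rho > rho_thr a unique
    integer key, counting all 26 neighbors of a cell as adjacent."""
    struct = np.ones((3, 3, 3), dtype=bool)       # 26-connectivity
    keys, nregions = ndimage.label(rho > rho_thr, structure=struct)
    return keys, nregions                          # key 0 = PM background

def cic_deposit(pos, mass, ng):
    """CIC mass assignment of Eq. (2); pos is (N,3) in grid units,
    wrapped periodically onto an ng^3 mesh."""
    rho = np.zeros((ng, ng, ng))
    base = np.floor(pos).astype(int)
    frac = pos - base
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - frac[:, 0])
                     * np.abs(1 - dy - frac[:, 1])
                     * np.abs(1 - dz - frac[:, 2]))
                idx = ((base[:, 0] + dx) % ng,
                       (base[:, 1] + dy) % ng,
                       (base[:, 2] + dz) % ng)
                np.add.at(rho, idx, mass * w)
    return rho
```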
The density is found exactly as before, except this time summing over only the particles in the tree: $`\rho _{\mathrm{tree}}(i,j,k)=\underset{tree}{\sum }m_pw_iw_jw_k`$ (3) Using this density, we solve Poisson’s equation again, except that non-periodic boundary conditions are used (Hockney & Eastwood 1981). The resulting potential $`\mathrm{\Phi }_{\mathrm{NP},\mathrm{tree}}`$ is the contribution that the tree made to $`\mathrm{\Phi }_{\mathrm{PM},\mathrm{all}}`$, without counting the ghost images due to the periodic boundary conditions of the latter. The force on a tree particle exerted by all the mass not in the tree (including the periodic copies of the tree) is then $`𝐅_{\mathrm{external}}=-\underset{i,j,k}{\sum }w_iw_jw_k\nabla \mathrm{\Phi }_{\mathrm{PM},\mathrm{all}}+\underset{i,j,k}{\sum }w_iw_jw_k\nabla \mathrm{\Phi }_{\mathrm{NP},\mathrm{tree}}`$ (4) Thus tidal forces within a tree region are computed on the mesh scale in a consistent manner, with interpolation used as required to find the forces on individual particles. Calculating the non-periodic potential with FFTs involves using a grid which is eight times larger in volume than that containing the actual mesh of interest, but since trees are compact and isolated regions, the volume of the larger grid which is non-zero is quite small. Thus the FFT which is computed for each tree can be done on a smaller grid, as long as the grid spacing remains the same as for the larger periodic FFT; we do this by embedding the irregular tree region in a cubic subgrid, padding with empty cells as needed. The final step is to calculate the internal forces $`𝐅_{\mathrm{internal}}`$ for each tree. We do this with the tree code of Hernquist (1987). Since the periodic nature of the potential was taken into account in finding the external forces, no Ewald summation is needed. Time stepping is handled in the same manner as in Xu (1995). That is, the PM potential is determined at the center of the large PM timestep, and each tree has its own, possibly smaller, timestep. There are a couple of slight differences: in Equation 15 of Xu (1995) we use the parameter $`\beta =0.05`$, and we decrease $`\delta t_{TREE}`$ so that 97.5% of the tree particles satisfy $`\delta t_i\ge \delta t_{TREE}`$. ### 2.3 Detailed Outline To sum up this section we give a more detailed outline of the code. All particles begin with the same time step $`\mathrm{\Delta }t=\mathrm{\Delta }t_{PM}`$; the velocities are given at time $`t`$ and the positions at time $`t+\mathrm{\Delta }t/2`$ (as described in Xu 1995). 1. Using the density from the previous step, we identify all particles belonging to trees, and to which tree (if any) each particle belongs (Section 2.1). 2. The time step for each tree is computed, and particle positions are adjusted if $`\mathrm{\Delta }t`$ has changed for that particle (Hernquist & Katz 1989). This can occur if a particle joins or leaves a tree, or if the tree time step has changed. 3. The total density due to all particles at time $`t+\mathrm{\Delta }t_{PM}/2`$ is found on a grid using Equation 2. The potential $`\mathrm{\Phi }_{\mathrm{PM},\mathrm{all}}`$ is found from this density, and the PM acceleration at mid-step is found for each particle. 4. Each tree is then dealt with in turn. First, the tree contribution to the PM acceleration is removed, as described in Section 2.2. Next the tree is stepped forward with a smaller time step using the tree code of Hernquist (1987), with the external forces included. 5. All particles not in trees are stepped forward using the PM acceleration.
The global time and cosmological parameters are updated, completing the step. ## 3 The Density Threshold In Section 2.1 the threshold density $`\rho _{\mathrm{thr}}`$ was introduced to demarcate dense regions which would be treated with greater resolution in both space and time. The best choice of this parameter depends on a number of considerations. One could set $`\rho _{\mathrm{thr}}`$ to such a low value that nearly all particles are in trees, or that only one large tree exists, thereby destroying the efficiency that the TPM algorithm is designed to give. On the other hand, too high a value would leave many interesting regions computed at the low resolution of the PM code. When modeling gravitational instability, one must also keep in mind that the density evolves from having only small overdensities initially to a state where there are a few regions of very large overdensity; thus the ideal threshold will evolve with time. With these considerations in mind, we base $`\rho _{\mathrm{thr}}`$ on the grid density as: $`\rho _{\mathrm{thr}}=A\overline{\rho }+B\sigma `$ (5) where $`\overline{\rho }`$ is the mean density in a cell, and $`\sigma `$ is the standard deviation of the cell densities. With this equation, the first two moments of the density distribution are used to fix $`\rho _{\mathrm{thr}}`$ in an adaptive manner. The coefficient $`A`$ is set to prevent the selection of too many or too large trees when $`\sigma `$ is small; its value will be near unity. The choice of $`B`$ will determine what fraction of particles will be placed in trees when $`\sigma `$ is large. This choice depends on the parameters of the simulation, such as the cosmological model (including the choice of $`\sigma _8`$) and the size of a grid cell. We choose a value of $`B`$ which will place $`\sim 1/3`$ of the particles in trees at the end of the simulation. The accompanying figure shows how tree properties vary over the course of a large LCDM simulation, using $`A=0.9`$ and $`B=4.0`$ in Equation 5. The value of $`\sigma `$ begins at 0.1, so at high redshift $`\rho _{\mathrm{thr}}\approx 1.5\overline{\rho }`$. This leads to a large number of trees which are low in mass and diffuse. As time goes on, these slight overdensities collapse and merge together, resulting in denser concentrations of mass. Also, $`\sigma `$ becomes larger (increasing to 4.1 by the end of the simulation), so a larger concentration of mass is needed before a region is identified as a tree. Thus the original distribution of trees evolves into one with fewer trees, but at higher masses (though at any given time the masses of trees roughly follow a power law distribution). The typical volume within tree regions also increases with time, but the total volume covered by trees (measured by the number of cells above $`\rho _{\mathrm{thr}}`$) decreases. Given the roughly log–normal distribution of density resulting from gravitational instability, the total volume in tree regions is less than one percent even when they contain $`\sim 30`$% of the mass. The rise in $`\rho _{\mathrm{thr}}`$ means that the size of the smallest tree found also rises– from 4 to 40 particles over the course of this run. This raises an issue that must be noted when interpreting the results of a TPM run: the choice of $`\rho _{\mathrm{thr}}`$ introduces a minimum size below which the results are no better than in a PM code. This is discussed in more detail in Section 5.
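In code, Equation (5) is essentially a one-liner on the already-computed PM density; a sketch (ours), with the $`A`$ and $`B`$ values used above:

```python
def density_threshold(rho_grid, A=0.9, B=4.0):
    """Adaptive threshold of Eq. (5): rho_thr = A*<rho> + B*sigma, with
    <rho> and sigma the mean and standard deviation of cell densities."""
    return A * rho_grid.mean() + B * rho_grid.std()
```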
## 4 Parallelism One of the strengths of the TPM algorithm is that after the PM step, each tree presents a self-contained problem: given the particle positions, velocities, and tidal forces, the tree stepping can be completed without the need to access any other data, since the effect of the outside universe is summarized by the tidal forces in the small tree region. This makes the tree part of the code intrinsically parallel. What makes such a separation possible is that during the multiple timesteps required to integrate particle orbits within a dense tree region the tidal forces may be deemed constant; the code is self-consistent in that the density on the PM grid is only determined on the Courant timescale for that particle distribution. Our parallel implementation of the TPM method uses a distributed memory model and the MPI message passing library, in order to maximize the portability of the code. The PM portion of the code is made parallel in a manner similar to that described in Bode et al. (1996). This scales well, and takes a small fraction of the total time as compared to the tree portion of the code. Two steps are taken to load balance the tree part of the code. First, trees are distributed among processors in a manner intended to equalize the amount of work done. The time it takes for a particular tree to be computed depends on the size of the tree, the cost of computing the force scaling roughly as $`N\mathrm{log}N`$. As trees are assigned to processors, a running tally is kept of the amount of work given to each node, and the largest unassigned tree is assigned to the processor given the least amount of work. The tree particles are then distributed among the processors, and each processor deals with its assigned trees, moving from the largest to the smallest. There is also a dynamic component to the load balancing: when a node has completed all of its assigned trees, it queries another process to see if that one is also finished. If that process still has an uncomputed tree remaining in its own list, it sends all the necessary tree data to the querying node. That node then evolves the tree and sends the final state back to the node that had the tree originally. Thus nodes that finish earlier than expected do not remain idle. The scaling of the code is shown for two problems of different size in the accompanying figure; the times shown are for the epoch when the underlying LCDM model is at low redshift (z=0.5), meaning that clustering is significant and calculating tree forces dominates the CPU time. At higher redshift, when the trees are less massive and more diffuse, the timing would be more like that of a PM code (this can be seen from Table 1). These timing tests were run on an SGI Origin 2000 with 250 MHz chips; the scaling on a PC cluster with a fast interconnect was found to be quite similar. The $`512^3`$ model is the one shown in the earlier figure; it scales reasonably well up to the largest $`NPE`$ we attempted; compared to $`NPE=32`$, the efficiency is better than 90% at $`NPE=128`$, and 80% at $`NPE=256`$. When using 32 nodes the code required 512 Mbyte per node, so we did not try any smaller runs. The $`256^3`$ times are for the same LCDM model, except with a smaller box size (150 Mpc/$`h`$) and $`\rho _{\mathrm{thr}}=0.85\overline{\rho }+4.0\sigma `$.
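The static assignment policy described at the start of this section is a classic greedy scheme; here is a compact sketch (ours) using a heap of per-processor work tallies and the $`N\mathrm{log}N`$ cost model quoted above. The dynamic component, with idle nodes taking trees from busy ones, then only has to smooth out the residual imbalance.

```python
import heapq
from math import log2

def assign_trees(tree_sizes, npe):
    """Greedy static load balance: visit trees from largest to smallest,
    always giving the next tree to the least-loaded processor; the work
    for a tree of N particles is modeled as N log N."""
    cost = lambda n: n * log2(max(n, 2))
    loads = [(0.0, pe) for pe in range(npe)]    # (accumulated work, pe id)
    heapq.heapify(loads)
    owner = [None] * len(tree_sizes)
    for t in sorted(range(len(tree_sizes)), key=lambda i: -tree_sizes[i]):
        work, pe = heapq.heappop(loads)
        owner[t] = pe
        heapq.heappush(loads, (work + cost(tree_sizes[t]), pe))
    return owner
```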
In this $`256^3`$ run the largest nonlinear scale is a larger fraction of the box size, so a greater fraction of the particles (37%) is placed in trees and the largest tree contains a greater proportion of the mass. This $`256^3`$ model scales extremely well from 4 to 16 processors, but drops to 70% efficiency at 32 nodes, and beyond 64 nodes does not speed up at all. The reason for this is that the largest tree in this simulation contains one percent of all particles, which means this one tree takes a few percent of the entire CPU time devoted to trees. As $`NPE`$ is increased, the time it takes to complete this one tree becomes the major part of the total time. The solution to this problem is to allow more than one processor to work on the same tree, which is quite possible (e.g. Davé et al. 1997 and the references therein; see also Xu 1995). The division of the total time between the different components of the code is shown in Table 1 for both low and high redshift. At low redshift the tree calculations dominate the total time (as long as this part of the code is load balanced– the rise in overhead for the $`256^3`$ model when $`NPE\ge 32`$ is due to imbalance, as discussed above). At high redshift the trees are smaller, so the overhead related to domain decomposition takes a large fraction of the total time; the main difference between the two redshifts is the rising cost of the tree calculations as trees become more massive and require more timesteps. Comparison with the P<sup>3</sup>M code of Ferrell & Bertschinger (1994) (made parallel by Frederic (1997)) shows that TPM (with 30% of the particles in trees) takes slightly less time than P<sup>3</sup>M if all the trees keep to the PM time step. Allowing trees to have individual time steps speeds up the TPM code by a factor of three to four. In the present implementation, particles within the same tree all use the same timestep; implementing multiple time steps within trees could save a further significant amount of computer time (roughly another factor of three) without loss of accuracy. The memory per process used by our current implementation is $`20N/NPE`$ reals when there is one cell per particle. This includes, for each particle, $`\stackrel{}{x},\stackrel{}{v},\stackrel{}{a}`$, and three integer quantities (a particle ID number, a tree membership key, and the number of steps per PM step). The remaining space is used by the mesh part of the code, and reused as temporary storage during the tree stepping. Because the grid density from the previous step is saved, the memory used could be reduced to $`17N/NPE`$ at the cost of computing the density twice per step. The $`1024^3`$ point shown in the scaling figure is for the same cosmological model and box size as the $`512^3`$ run, but with eight times as many particles. This run shows the great potential of the TPM algorithm. At lower redshifts over 80% of the computational time is spent finding tree forces— precisely that portion of the code which involves no communication; thus a run of this size would be able to efficiently utilize even more processors. This does not necessarily mean using a larger supercomputer; rather, one could use networked PC’s or workstations. These distributed resources could be used to receive a single tree or small group of trees, do the required time stepping in isolation, and send back the final state.
The time required to evolve a single tree varies from less than a second to a couple of minutes, so even in situations with a high network latency the cost of message passing need not be prohibitive. ## 5 Tests of the Code To test how the code performs in a standard cosmological simulation we ran both TPM and the P<sup>3</sup>M code of Ferrell & Bertschinger (1994) with the same initial conditions. The test case contains $`128^3`$ particles in a 150 Mpc/$`h`$ box, with a flat LCDM cosmological model close to that proposed by Ostriker & Steinhardt (1995): $`\mathrm{\Omega }_m=0.37`$, $`\mathrm{\Lambda }=0.63`$, $`H_o=70`$ km/s/Mpc, $`\sigma _8=0.8`$, and tilt $`n=0.95`$. The softening length of the particles is $`ϵ=18.31`$ kpc/$`h`$. The number of mesh points in the PM grid was $`256^3`$ for the P<sup>3</sup>M run and $`128^3`$ for TPM. The TPM threshold density was $`\rho _{\mathrm{thr}}=0.85\overline{\rho }+4.0\sigma `$, so that a third of the particles were contained in trees by $`z=0`$. In the tree code an opening angle of $`\theta =0.5`$ was used. The first figure of this section shows projected particle positions at the final redshift $`z=0`$ for a portion of the volume around the largest halo that had formed. One important difference between the two codes can be seen by examining this figure. It is clear that the largest structures are quite similar in both cases; but notice that a number of small halos can be identified in the P<sup>3</sup>M snapshot that are not present in TPM. To verify this visual appearance in a more quantitative manner, bound halos were identified with DENMAX (Gelb & Bertschinger 1994). The resulting mass functions for the two codes are shown in the next figure. The agreement is good for trees with more than 100 particles, but the TPM model has fewer small halos with fewer than 100 particles, confirming the visual impression. The cause of this difference arises from the choice of $`\rho _{\mathrm{thr}}`$. Those objects that collapse early, which through merger and accretion will end up having higher masses, are identified when only slightly overdense and thus are followed at higher resolution throughout their formation. As $`\rho _{\mathrm{thr}}`$ rises, a halo must reach a higher overdensity before being followed with the tree code, so objects which collapse at late times are simulated at lower resolution. In this test case, the smallest tree at $`z=0`$ contains 66 particles, so it is unsurprising that TPM has fewer halos near and below this size. This effect is shown in a different way in a further figure, where the two-point correlation function $`\xi (r_{12})`$ is plotted for the two test runs. For separations $`r_{12}>1`$ Mpc there is no discernible difference between the P<sup>3</sup>M and TPM particle correlations. However, when all particles are included in calculating $`\xi `$, the P<sup>3</sup>M code yields a greater correlation at smaller separations. We also selected from each simulation the particles contained in the 1000 largest halos found by DENMAX, and redid the calculation with only those particles. In this case, the TPM correlation function is the same as the P<sup>3</sup>M one, and in fact is higher for $`r_{12}<10ϵ`$. This demonstrates clearly that the lower TPM correlation function in the former case is an effect of the higher force resolution of P<sup>3</sup>M in small halos and other regions where $`\rho <\rho _{\mathrm{thr}}`$.
Within TPM halos followed as trees, the resolution is as good as (or better than) in P<sup>3</sup>M; the difference in $`\xi `$ computed for halo particles only is most likely due to differences in softening (the tree code uses a spline kernel while P<sup>3</sup>M uses a Plummer law) and in the time stepping. The distribution of velocities is also sensitive to resolution effects. To examine this, particle pairs were divided into 30 logarithmically spaced bins, with bin centers between 50 kpc and 20 Mpc; for each pair the line-of-sight velocity difference $`v_{12}`$ was computed. Histograms showing the distribution of $`v_{12}`$ in selected radial bins are shown in the next figure. If only particles in the 100 largest halos are considered, the two codes are indistinguishable. But again, a difference becomes noticeable as more particles are included — the P<sup>3</sup>M halos begin to show more pairs with a small velocity difference ($`v_{12}<250`$ km/s). Since the P<sup>3</sup>M code is following smaller halos with higher resolution, these halos have smaller cores and a cooler velocity distribution than TPM halos with the same mass. In order to compare the properties of individual collapsed objects, we selected a group of halos as follows. First, we chose those DENMAX halos without a more massive neighbor within 2 Mpc/$`h`$. The spherically averaged density profile $`\rho (r)`$ was found for each halo, and a fit to the NFW profile (Navarro, Frenk & White 1997) was computed by a $`\chi ^2`$ minimization; those with less than 99.5% likelihood were excluded from further analysis. This fitting procedure repositioned the centers onto the densest region of the halo; we removed those halos where the positions found in the two models differed by more than $`r_{200}/3`$, in order to be sure that it is the same halo being examined in both cases. The resulting $`\rho (r)`$ for a few halos selected in this manner is shown in the corresponding figure; the agreement is quite good, and within statistically expected fluctuations. If the TPM code had a lower resolution then a broader halo profile with a lower density peak would result, but this is not seen. Comparisons of other derived properties are shown in a further figure. In each case we plot the fractional difference of the two models: \[f(TPM)-f(P<sup>3</sup>M)\]/0.5\[f(TPM)+f(P<sup>3</sup>M)\]. The top panel shows the number of particles within 1.5 Mpc/$`h`$ of the center, and the second panel shows the velocity dispersion. The agreement in both cases is good– the dispersions are 7% and 9% respectively, with no systematic offset or discernible trend with halo mass. The third panel compares $`r_{200}`$ from the NFW fits, which also agrees quite well, the dispersion being 5%. At the low mass end there are some TPM halos with sizes more than 20% larger, but these are also the ones with the smallest $`r_{200}`$. The final panel compares the core radius $`r_s`$ resulting from the NFW profile fits, which shows the most variation between the codes. There are a number of TPM halos with substantially larger cores (particularly at low mass), but the average TPM core size is smaller by 10% than that in P<sup>3</sup>M. It appears that most TPM cores have been followed with the same or higher resolution than obtained with the P<sup>3</sup>M code, but a few have not.
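For reference, the NFW form used in these fits, together with a minimal fitting sketch (ours; the paper's own $`\chi ^2`$ procedure may differ in binning and weighting):

```python
import numpy as np
from scipy.optimize import curve_fit

def nfw(r, rho_s, r_s):
    """NFW profile (Navarro, Frenk & White 1997):
    rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)**2)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

def fit_nfw(r, rho, rho_err):
    """Weighted least-squares fit of a spherically averaged profile;
    returns the best-fit (rho_s, r_s)."""
    p0 = (rho[0], r[len(r) // 2])      # crude starting guess
    popt, _ = curve_fit(nfw, r, rho, p0=p0, sigma=rho_err)
    return popt
```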
Examination of the halos with the largest differences often shows substructure or high ellipticity, but this is not always the case. ## 6 Summary In the current environment, those wishing to carry out high resolution simulations must tailor their approach to exploit parallel and distributed computing architectures. In this paper we have presented an algorithm for evolving cosmological structure formation which is well suited to such machines. By suitable domain decomposition, one large volume is broken up into a large number of smaller regions, each of which can be solved in isolation. This simplifies balancing the load between different processes, and makes it possible to use machines with high latency (e.g. a large number of physically distributed workstations) efficiently. Furthermore, it ensures that higher resolution in both space and time is applied in only those regions which require it. An important parameter in the TPM code is the density threshold. By tying this parameter to the first and second moments of the density distribution, it is possible to follow initially small overdensities as they collapse and thus simulate halo evolution with as high resolution as the more common P<sup>3</sup>M code. However, it is best to consider only those halos which contain twice as many particles as the smallest tree. Recently Bagla (1999) introduced a different method of combining gridded and tree codes called TreePM. This algorithm computes both a PM and a tree force for every particle, which has the advantage of uniform resolution for all particles. The performance of TPM in lower density regions can always be improved by lowering the density threshold, though this may lead to unacceptably large trees. Another possibility, which we intend to investigate, is to create a “TP<sup>3</sup>M”, which uses P<sup>3</sup>M rather than PM in the non-tree volume. This could be quite practicable, since the particle-particle interactions are not expensive to compute when the density is low. However, it may be that increased force resolution in low density regions is not a true improvement. Melott et al. (1997) and Splinter et al. (1998) showed that discreteness and two-body scattering effects become problematic when the force resolution outstrips the corresponding mass resolution. This led to a recent investigation by Knebe et al. (2000), who concluded that strong two-body scattering can lead to numerical effects, particularly when the local interparticle separation is large or the time step is too long; slowly moving pairs of particles may suffer interactions which do not conserve energy. The TPM code will be less prone to such effects because low density regions use lower force resolution; only as the local mass resolution increases does the force resolution become higher, and simultaneously the time step will tend to become smaller. This research was supported by NSF Grants AST-9318185 and AST-9803137 (under Subgrant 99-184), and the NCSA Grand Challenge Computation Cosmology Partnership under NSF Cooperative Agreement ACI-9619019, PACI Subaward 766. Many thanks to Ed Bertschinger for use of his P<sup>3</sup>M code, and Lars Hernquist for supplying a copy of his tree code.
# LOTIS Upper Limits and the Prompt OT from GRB 990123 ## Introduction The ultimate reward for the Gamma-Ray Burst Coordinates Network (GCN) barthelmy98 came when the Robotic Optical Transient Search Experiment (ROTSE) detected prompt optical emission from GRB 990123 akerlof99 . Although this discovery marks another milestone in comprehending the physics of GRBs, bright optical transients (OTs) may be the exception rather than the rule. Both LOTIS and ROTSE have unsuccessfully attempted to detect these predicted flashes on many occasions park97a ; park97b ; williams98 ; williams99 ; schaefer99 ; kehoe99 . Although some of the non-detections may be attributed to large extinction, GRB 990123 demonstrated that the progenitor is not always obscured. ## Observations & Analysis During more than 1100 nights of possible observations (since October 1996), LOTIS has responded to 127 GCN triggers. Of these, 68 triggers were unique GRB events; a rate of approximately one unique GRB event every 16.5 days. The quality of the LOTIS “coverage” for a given event depends on five factors: observing conditions, LOTIS response time, difference between the initial and final coordinates, size of the final error box, and the duration of the GRB. Table 1 lists 13 events for which LOTIS achieved good coverage. First we compare GRB 990123 with the LOTIS upper limits to test whether the flux of the prompt optical emission scales with some gamma-ray property. Here and throughout the analysis we neglect extinction effects. The first row in Table 1 lists the properties of GRB 990123 briggs99 ; akerlof99 . The columns display the UTC date of the burst, the BATSE trigger number, the 64 ms and 1024 ms peak fluxes (50 - 300 keV), and the gamma-ray fluence ($`>`$20 keV) of each event. The last three columns are the scaled magnitudes, $$m_{\mathrm{GRB}}=m_{\mathrm{GRB990123}}-2.5\mathrm{log}\left(\frac{X_{\mathrm{GRB}}}{X_{\mathrm{GRB990123}}}\right),$$ (1) where $`m_{\mathrm{GRB990123}}=9.0`$ is the peak magnitude of GRB 990123, and $`X_{\mathrm{GRB}}`$ and $`X_{\mathrm{GRB990123}}`$ are the peak flux or fluence values for those events. The LOTIS sensitivity varies depending on observing conditions, but in general a conservative limiting magnitude is $`m\simeq 11.5`$ prior to March 1998 (upgrade to a cooled CCD) and $`m\simeq 14.0`$ following that date. Table 1 shows that the scaled prompt optical emission for both peak flux and fluence is often brighter than the LOTIS upper limits, which suggests that these simple relationships are not valid. Briggs et al. briggs99 show that the optical flux measured during GRB 990123 is not consistent with an extrapolation of the burst spectrum to low energies. However Liang et al. liang99 point out that the extrapolated tails rise and fall with the optical flux. A low energy enhancement would produce an upward break which might account for the measured optical flux during GRB 990123. It is important to determine if there is a low energy upturn in the spectrum since it would establish whether or not the optical and gamma-ray photons are produced by the same electron distribution. The LOTIS upper limits can be used to constrain a low energy enhancement assuming it is common to all GRBs. For the events listed in Table 1 we fit the gamma-ray spectra during the LOTIS observations to the Band functional form band93 . In a few cases the low energy extrapolation is near the LOTIS upper limit. The solid line in Figure 1 shows the Band fit to GRB 971006 and its extrapolation to low energies.
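A minimal sketch of the scaling in Eq. (1) follows (assuming only that the optical flux scales linearly with the chosen gamma-ray property; the inputs below are placeholders, not the Table 1 values):

```python
import numpy as np

def scaled_magnitude(x_grb, x_ref=1.0, m_ref=9.0):
    """Scale the GRB 990123 peak optical magnitude (m = 9.0) to another
    burst via Eq. (1), assuming the optical flux scales linearly with
    the chosen gamma-ray property X (peak flux or fluence)."""
    return m_ref - 2.5 * np.log10(x_grb / x_ref)

# A burst with one tenth of the reference fluence scales to m = 11.5,
# i.e. 2.5 magnitudes fainter (placeholder numbers, not Table 1 entries).
print(scaled_magnitude(x_grb=0.1))
```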
Fits to the spectra of GRB 990123 during the first (short dash), second (dash-dot), and third (long dash) ROTSE observations and the corresponding ROTSE detections (filled circles) are also shown. The extrapolation of the GRB 971006 fit predicts an $`m\simeq 12.4`$ optical flash. Even a slight upward break in the spectrum would have produced a detectable OT. We conclude that the LOTIS upper limits support the hypothesis that the low energy emission is produced by a different electron distribution than the high energy emission. Finally we attempt to use the LOTIS upper limits and the external reverse shock model to constrain the physical properties of the GRB blast wave. Sari and Piran sari99 show that the fraction of the energy which gets emitted in the optical band depends on the values of the cooling frequency and the characteristic synchrotron frequency. For the external reverse shock these frequencies are given by $$\nu _c=8.8\times 10^{15}\mathrm{Hz}\left(\frac{ϵ_B}{0.1}\right)^{-3/2}E_{52}^{-1/2}n_1^{-1}t_A^{-1/2},$$ (2) $$\nu _m=1.2\times 10^{14}\mathrm{Hz}\left(\frac{ϵ_e}{0.1}\right)^2\left(\frac{ϵ_B}{0.1}\right)^{1/2}\left(\frac{\gamma _0}{300}\right)^2n_1^{1/2},$$ (3) where $`ϵ_e`$ and $`ϵ_B`$ are the fractions of equipartition energy in the electrons and magnetic field, $`E_{52}`$ is the total energy in units of $`10^{52}`$ erg, $`n_1`$ is the density of the circumburst medium in cm<sup>-3</sup>, $`\gamma _0`$ is the initial Lorentz factor, and $`t_A`$ is the duration of the emission in seconds. Sari and Piran assume the frequency dependencies modify the fluence of a moderately strong GRB, i.e. $`10^{-5}`$ erg cm<sup>-2</sup>. In this analysis we compare the afterglow properties of GRB 970508 found by Wijers and Galama wijers99 to those found by Granot et al. granot99 . Therefore we use a fluence of $`3.1\times 10^{-6}`$ erg cm<sup>-2</sup> emitted over the entire LOTIS integration time of $`t_A=10.0`$ s. The index of the electron power-law distribution is set to $`p=2.2`$. Figure 2 shows contour plots of the predicted magnitude of the prompt OT for GRB 970508 as a function of $`n_1`$ and $`\gamma _0`$. GRB 970508 could not be observed by LOTIS or ROTSE since it occurred during the day. Values of $`E_{52}=3.5`$, $`ϵ_e=0.12`$, and $`ϵ_B=0.089`$ from Wijers and Galama are used in the left panel and values of $`E_{52}=0.53`$, $`ϵ_e=0.57`$, and $`ϵ_B=0.0082`$ from Granot et al. are used in the right panel. The right panel demonstrates the effect of altering the total energy and the distribution of energy to the electrons and the magnetic field. The smaller values of $`E_{52}`$ and $`ϵ_B`$ shift the contours to the upper left while the larger $`ϵ_e`$ steepens the breaks in the contours. The increased shading corresponds to a decreasing detection probability. However for nearly all values of $`n_1`$ and $`\gamma _0`$ shown the predicted OT could have been detected by the upgraded LOTIS system. Wijers and Galama find a circumburst medium density of $`n_1=0.030`$, which predicts an $`m=9.0`$–$`9.5`$ optical flash nearly independent of the initial Lorentz factor. Granot et al. find a considerably higher value of $`n_1=5.3`$, which predicts an $`m=8.7`$–$`12.4`$ OT that is very dependent on the initial Lorentz factor. The LOTIS upper limits mildly favor the GRB blast wave values determined by Granot et al. since dim OTs are predicted over a larger range of initial Lorentz factors.
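Eqs. (2) and (3) are easy to evaluate numerically. The sketch below assumes the negative exponents restored above for the cooling frequency and uses the Wijers & Galama parameter values quoted in the text; it is illustrative only.

```python
def nu_c(eps_B, E52, n1, tA):
    """Reverse-shock cooling frequency, Eq. (2), in Hz."""
    return (8.8e15 * (eps_B / 0.1)**-1.5 * E52**-0.5
            * n1**-1.0 * tA**-0.5)

def nu_m(eps_e, eps_B, gamma0, n1):
    """Characteristic synchrotron frequency, Eq. (3), in Hz."""
    return (1.2e14 * (eps_e / 0.1)**2 * (eps_B / 0.1)**0.5
            * (gamma0 / 300.0)**2 * n1**0.5)

# Wijers & Galama values for GRB 970508 quoted in the text, with an
# assumed gamma0 = 300 and the t_A = 10 s integration time:
print(nu_c(eps_B=0.089, E52=3.5, n1=0.030, tA=10.0))
print(nu_m(eps_e=0.12, eps_B=0.089, gamma0=300.0, n1=0.030))
```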
# Chromospheric activity of the double-lined spectroscopic binary BF Lyn ## 1. Introduction BF Lyn (HD 80715) is a double-lined spectroscopic binary with spectral types K2V/\[dK\] and both components have variable H$`\alpha `$ emission and strong Ca ii infrared triplet emission noted by Barden and Nations (1985). Strassmeier et al. (1989) observed strong Ca ii H & K and H$`ϵ`$ emissions from both components. The orbital period is 3.80406 days (Barden and Nations, 1985) and Strassmeier et al. (1989) from photometric observations found that BF Lyn is a synchronized binary with a circular revolution for a long time. Montes et al. (1995) also found strong emission in the Ca ii H & K lines from both components with very similar intensities and the H$`ϵ`$ line in emission too. In this paper we present simultaneous spectroscopic observations of H$`\alpha `$, H$`\beta `$, H$`ϵ`$, Ca ii H & K, and Ca ii IRT lines of this chromospherically active binary. ## 2. Observations Spectroscopic observations in several optical chromospheric activity indicators of BF Lyn and some inactive stars of similar spectral type and luminosity class have been obtained during four observing runs. 1) Two runs were carried out with the 2.56 m Nordic Optical Telescope (NOT) at the Observatorio del Roque de Los Muchachos (La Palma, Spain) in March 1996 and April 1998, using the SOFIN echelle spectrograph covering from 3632 Å to 10800 Å (resolution from $`\mathrm{\Delta }`$$`\lambda `$ 0.15 to 0.60 Å), with a 1152$`\times `$770 pixels EEV P88200 CCD as detector. 2) One observing run was obtained using the 2.1 m telescope at McDonald Observatory (USA) in January 1998 using the Sandiford Cassegrain Echelle Spectrograph covering from 6382 Å to 8700 Å (resolution from $`\mathrm{\Delta }`$$`\lambda `$ 0.13 to 0.20 Å), and with a 1200$`\times `$400 pixels Reticon CCD as detector. 3) The last run was carried out with the 2.5 m INT at the Observatorio del Roque de Los Muchachos (La Palma, Spain) in January 1999 using the Multi-Site Continuous Spectroscopy (MUSICOS), covering from 3950 Å to 9890 Å (resolution from $`\mathrm{\Delta }`$$`\lambda `$ 0.15 to 0.40 Å), with a 2148$`\times `$2148 pixels SITe1 CCD as detector. In the four runs we have obtained 11 spectra of BF Lyn in different orbital phases. Stellar parameters of BF Lyn have been adopted from Strassmeier et al. (1993), except for T<sub>conj</sub> taken from Barden & Nations (1985). The spectra have been extracted using the standard reduction procedure in the IRAF package (bias subtraction, flat-field division and optimal extraction of the spectra). The wavelength calibration was obtained by taking spectra for a Th-Ar lamp. Finally the spectra have been normalized by a polynomial fit to the observed continuum. The chromospheric contribution in the activity indicators has been determined using the spectral subtraction technique. ## 3. The H$`\alpha `$ line We have taken several spectra of BF Lyn in the H$`\alpha `$ line region in four different epochs and at different orbital phases. In all the spectra we can see the H$`\alpha `$ line in absorption from both components. The spectral subtraction reveals that both stars have an excess H$`\alpha `$ emission. The line profiles are displayed in Fig. 1, for each observation we plot the observed spectrum (left panel), and the subtracted one (right panel). 
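A schematic sketch of the spectral subtraction step follows (it assumes the inactive-star template has already been scaled, velocity-shifted and rotationally broadened to match the component in question; the line parameters below are invented for illustration):

```python
import numpy as np

def excess_ew(wave, flux_active, flux_template):
    """EW (wavelength units) of the excess emission remaining after the
    template is subtracted; both spectra are continuum-normalised and
    on the same wavelength grid."""
    return np.trapz(flux_active - flux_template, wave)

# Toy example: a Gaussian emission core filling in a normalised H-alpha
# absorption profile (all line parameters invented).
wave = np.linspace(6555.0, 6570.0, 500)
template = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 6562.8) / 1.5)**2)
active = template + 0.3 * np.exp(-0.5 * ((wave - 6562.8) / 0.8)**2)
print(excess_ew(wave, active, template))   # ~0.6 A of excess emission
```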
The excess H$`\alpha `$ emission equivalent width (EW) is measured in the subtracted spectrum and corrected for the contribution of the components to the total continuum; in the case of BF Lyn we assume the same contribution for both stars. At some orbital phases, near conjunction, it is not possible to separate the contributions of the two components. The excess H$`\alpha `$ emission of BF Lyn shows variations with the orbital phase for both components; the hot star is the more active in H$`\alpha `$. In Fig. 2 we have plotted, for the McD 98 observing run, the H$`\alpha `$ EW versus the orbital phase for the hot and cool components, respectively. The highest H$`\alpha `$ EW for the hot component is reached at about 0.4 orbital phase and the lowest value at about 0.9 orbital phase, whereas the cool component shows the highest H$`\alpha `$ EW near 0.9 orbital phase and the lowest value between 0.2 and 0.4 orbital phases. The variations of H$`\alpha `$ EW for both components are anticorrelated, and the most active areas are found on the facing hemispheres. The same behaviour is also found in Ca ii IRT. The excess H$`\alpha `$ EW emission also shows seasonal variations: for instance, the values of the MUSICOS 99 observing run are very different, especially for the cool component, from the McD 98 values at very similar orbital phases. ## 4. The H$`\beta `$ line Five spectra in the H$`\beta `$ region are available in three different epochs and at different orbital phases. In all the spectra the H$`\beta `$ line appears in absorption from both components; the application of the spectral subtraction technique reveals a clear excess H$`\beta `$ emission from both stars. The line profiles are displayed in Fig. 3. We have determined the excess H$`\beta `$ emission EW in the subtracted spectra, the ratio of excess emission EWs, and the $`\frac{E_{H\alpha }}{E_{H\beta }}`$ relation: $$\frac{E_{H\alpha }}{E_{H\beta }}=\frac{EW_{sub}(H\alpha )}{EW_{sub}(H\beta )}\cdot 0.2444\cdot 2.512^{(B-R)}$$ (1) given by Hall & Ramsey (1992) as a diagnostic for discriminating between the presence of plages and prominences on the stellar surface. The low ratio that we have found in BF Lyn does not allow us to discriminate between the two structures. ## 5. The Ca ii IRT lines The Ca ii infrared triplet (IRT) $`\lambda `$8498, $`\lambda `$8542, and $`\lambda `$8662 lines are other important chromospheric activity indicators. We have taken several spectra of BF Lyn in the Ca ii IRT line region in three different epochs and at different orbital phases; the three Ca ii IRT lines are only included in the MUSICOS 99 observing run. In all the spectra we can see the Ca ii IRT lines in emission from both components (Fig. 4). As in the case of the H$`\alpha `$ line, the Ca ii IRT emission shows variations with the orbital phase for both components. In Fig. 2 we have plotted, for the McD 98 observing run, the Ca ii $`\lambda `$8542 EW versus the orbital phase for the hot and cool components, respectively. The variations of Ca ii emission EW for both components are anticorrelated and show the same behaviour found in the excess H$`\alpha `$ emission EW. ## 6. The Ca ii H $`\&`$ K and H$`ϵ`$ lines We have taken four spectra in the Ca ii H & K line region during the NOT (96 & 98) observing runs, and another spectrum of BF Lyn was taken in 1993 with the 2.2 m telescope at the German Spanish Astronomical Observatory (CAHA) (Montes et al., 1995). These spectra (Fig.
5) exhibit clear and strong Ca ii H & K and H$`ϵ`$ emission lines from both components, in the case of CAHA 93 run the H$`ϵ`$ emission line from the hot component is overlapped with the Ca ii H emission of the cool component. The excess Ca ii H $`\&`$ K and H$`ϵ`$ emissions change with the orbital phase during the NOT 98 run in the same way as the corresponding excess Ca ii $`\lambda `$8542 and H$`\alpha `$ emissions. The excess Ca ii H $`\&`$ K EW emissions also show seasonal variations, for instance, the values of CAHA 93 observing run are lower than NOT 96 & 98. ### Acknowledgments. This work has been supported by the Universidad Complutense de Madrid and the Spanish Dirección General de Enseñanza Superior e Investigación Científica (DGESIC) under grant PB97-0259. ## References Barden S.C., Nations H.L., 1985, in Cool Stars, Stellar Systems, and the Sun, ed. M. Zeilik and D.M. Gibson (Springer) p. 262. Hall J.C., Ramsey L.W., 1992, AJ, 104, 1942 Montes D., De Castro E., Fernández-Figueroa M.J., and Cornide M., 1995, A&AS, 114, 287 Strassmeier K.G., Hooten J.T., Hall D.S., and Fekel F.C., 1989, PASP, 101, 107 Strassmeier K.G., Hall D.S., Fekel F.C., Scheck M., 1993, A&AS, 100, 173
# Unzipping DNA - towards the first step of replication ## Abstract The opening of the Y-fork - the first step of DNA replication - is shown to be a critical phenomenon under an external force at one of its ends. From the results of an equivalent delocalization in a non-hermitian quantum-mechanics problem we show the different scaling behavior of unzipping and melting. The resultant long-range critical features within the unzipped part of Y might play a role in the highly correlated biochemical functions during replication. DNA, the basic genetic material, is generally a very long, flexible, linear or circular molecule, with length varying from $`2\mu m`$ (5000 base pairs) for simple viruses to $`3.5\times 10^7\mu m`$ ($`10^{11}`$ base pairs) for more complex organisms. In spite of the wide diversity in the information content and the functionalities of organisms, the general rules for replication have an astounding universality in the sense of independence of the system. A double helical DNA can be made to melt (i.e. the two strands can be separated) in vitro (i.e. in the lab) by changes in pH, solvent conditions and/or temperature (“thermal melting”), but they are found to be extremely stable in the cell. As per the standard dogma of molecular biology, every step of a biochemical process is mediated by an enzyme (encoded in the DNA), and the process “that accompanies DNA replication requires enzymes specialized for this function: DNA helicases to disrupt the base pairs, topoisomerases to provide a swivel to unwind the strands, and single-strand binding proteins (SSB’s) to coat and stabilize the unwound strands. Melting of the duplex at a replication origin for initiation, and at the fork during elongation, requires an expenditure of energy, an investment justified by the functional and evolutionary benefits of maintaining the genome as a duplex DNA.” Many biochemical details of the replication process are known, organism by organism. The proposal of a Y-shaped structure for a linear molecule as the starting point of replication seems to be corroborated by experiments, but as yet a proper analytical understanding is lacking. Very recently, with the advent of various physical techniques, attempts have been made to study the process from a physical, rather than a biochemical, point of view. Special attention has been given to the measurements of forces to unzip a double stranded DNA (dsDNA) molecule. Several studies have been made to understand the effect of forces in absence of any enzymes, while, in other studies, enzymatic activities involving expenditure of energy and displacement have been interpreted in terms of effective forces. Similar to “transcription against applied force” of Ref. , experiments combining the mechanical opening of DNA with the enzymatic replication or transcription have been proposed, and such an experiment would be highly significant. The results of Ref. distinguish the cases of conventional thermal melting and the unzipping which has been called directional melting. Our analytical approach shows the differences between these two cases. Some of the quantitative questions are the following: (1) Is there a critical force to open up a double-stranded chain in thermal equilibrium, acting say at one end only? (2) Is the nature of the transition different from the thermal melting of the bound pair and is it reflected in the opened region? 
Once these equilibrium questions are understood, we may ask (3) how the pair opens up in time after the force is applied \- the question of the dynamics of unzipping. In order to focus on the effect of the pulling force (see Fig. 1) on the bound strands, we take the viewpoint of a minimal model that transcends microscopic details but on which further details can be added for a realistic situation. We treat the DNA as consisting of two flexible interacting elastic strings. Winding is ignored. In a sense, the effect of topoisomerases is built into the model by maintaining the chains in the native state. Such a model has been found to be useful for many properties of DNA. For quantitative results we consider a bound situation as obtained by a binding square-well potential. Our focus is on the simpler aspects of the theory, and here we answer the first two questions posed above. Our model is this: Two Gaussian polymer chains in $`d=3`$ dimensions, with $`\tau `$ denoting the position of a monomer along the contour of a chain, are tied at one end and pulled by a stretching force $`𝐠=g\widehat{𝐞}_g`$ at the other end (see Fig. 1), $`\widehat{𝐞}_g`$ being the unit vector in the direction of the force. The energy from the force is proportional to the separation $`𝐫(N)`$ of the end points at $`\tau =N`$, i.e. $$𝐠\cdot 𝐫(N)=\int _0^Nd\tau \,𝐠\cdot \frac{\partial 𝐫(\tau )}{\partial \tau },$$ (1) since $`𝐫(0)=\mathrm{𝟎}`$. Assuming identical interaction for all base pairs (or for a DNA with identical base pairs) the Hamiltonian can be written in the relative coordinate as $$\frac{H}{k_BT}=\int _0^Nd\tau \left[\frac{1}{2}\left(\frac{\partial 𝐫(\tau )}{\partial \tau }\right)^2-𝐠\cdot \frac{\partial 𝐫(\tau )}{\partial \tau }+V(𝐫(\tau ))\right],$$ (2) where $`𝐫(\tau )`$ is the $`d`$-dimensional relative coordinate of the two chains at the contour length $`\tau `$. Base pairing requires that the monomers on the two strands interact only if they are at the same contour length $`\tau `$. The potential energy is therefore given by the integral of the potential $`V(𝐫(\tau ))`$ over $`\tau `$. $`V(𝐫)`$ is a short-range potential and its detailed form is not important. One choice, as generally used for the renormalization group approach, is to take a contact potential $`V(𝐫)=v_0\delta _\mathrm{\Lambda }(𝐫)`$ where, in Fourier space, $`\delta _\mathrm{\Lambda }(𝐪)=1`$ for $`|𝐪|\le \mathrm{\Lambda }`$, $`\mathrm{\Lambda }`$ being a cut-off reminiscent of the underlying microscopic structure. For the equivalent quantum mechanical calculation, we choose a square-well potential. In thermal equilibrium, the properties are obtained from the free energy $`F=-k_BT\mathrm{ln}Z`$, where the partition function $`Z=\int 𝒟𝐫\,\mathrm{exp}(-H/k_BT)`$ sums over all the configurations of the chains. The Hamiltonian written in the above form can be thought of as a directed polymer in $`d+1`$ dimensions, about which many results are known for $`𝐠=0`$. If we treat $`\tau `$ as a time-like co-ordinate, then the same Hamiltonian represents, in the path integral formulation, a quantum particle in imaginary time. This quantum particle with $`𝐠\ne 0`$ then corresponds to the imaginary vector potential problem much discussed in recent times. We make use of both pictures in this paper and follow the formulation of Ref. closely. For $`d=1`$, the quantum problem with $`V(x)=v_0\delta (x)`$ is exactly solvable, and is done in Ref. as a single impurity problem.
It was shown that there is a critical $`g_c`$ below which the force does not affect the bound-state energy, i.e. the quantum particle remains localized near the potential well, while for $`g>g_c`$ the particle delocalizes. In the polymer picture, this means that in low dimensions, a force beyond a critical strength separates the two strands. It should be pointed out here that the force is applied at one end only, but it is not a boundary or edge effect, mainly because of the connectivity of the polymer chain as expressed by Eq. 1. Details of the phase transition behavior of the Hamiltonian of Eq. 2 for $`g=0`$ are known from exact renormalization group (RG) calculations for a $`\delta `$-function potential. With $`g=0`$, there are two fixed points (fp) at (i) $`u^{}=0`$, and (ii) $`u^{}=ϵ`$, with $`ϵ=2-d`$ and $`u=vL^ϵ`$ as the dimensionless running potential strength parameter, $`L`$ being an arbitrary length-scale. For $`d<2`$, the first one is unstable and hence the critical point, while for $`d>2`$ ($`ϵ<0`$) the second one is the unstable one. The unstable fp ($`u_\mathrm{u}^{}`$) represents the melting or unbinding of the two chains. It also follows that other details of $`V(𝐫)`$ are irrelevant in the RG sense, i.e., they vanish in the large length-scale limit, explaining the universality of the problem. The important length-scales for this critical point, from the bound state side, come from the typical size of the denatured bubbles of length $`\tau _\mathrm{m}`$ along the chain and $`\xi _\mathrm{m}`$ in the spatial extent (Fig. 1b). Close to the critical point these length scales diverge as $`\xi _\mathrm{m}\sim |\mathrm{\Delta }u|^{-\nu _\mathrm{m}}`$ and $`\tau _\mathrm{m}\sim \xi _\mathrm{m}^\zeta `$, where $`\mathrm{\Delta }u`$ is the deviation from the critical point, and $`\nu _\mathrm{m}`$ and $`\zeta `$ are the two important exponents, $$\nu _\mathrm{m}=1/|d-2|\mathrm{and}\zeta =2.$$ (3) It is this $`\zeta `$ (the dynamic exponent of the quantum problem) that will distinguish the new phenomena we are trying to understand. The mapping to imaginary time quantum mechanics allows us to use the methods of quantum mechanics. As mentioned, we choose a square-well potential $`V(𝐫)=-V_0`$, for $`r<r_0`$, and $`0`$, otherwise. The quantum Hamiltonian is then given by $$H_q(𝐠)=\frac{1}{2}(𝐩+i𝐠)^2+V(𝐫),$$ (4) in units of $`\mathrm{\hbar }=1`$ and mass $`m=1`$, with $`𝐩`$ as momentum. For simplicity, the well is chosen to be just deep enough to have only one bound state (in the quantum mechanical picture, with $`𝐠=0`$) with energy $`E_0<0`$. The non-hermitian Hamiltonian can be connected to the hermitian Hamiltonian at $`g=0`$ by $$U^{-1}H_q(𝐠)U=H_q(𝐠=0),\mathrm{where}U=\mathrm{exp}(𝐠\cdot 𝐫).$$ (5) The wave-functions are also related by this $`U`$-transformation, so that if the transformed bound (i.e. localized) state wave-function remains normalizable, the bound state energy will not change. We refer to Ref. for details of the argument. The continuum part of the spectrum will have the minimum energy $`E=-g^2/2`$ (the state with wave-vector $`𝐤=0`$). For the localized state at $`g=0`$, the wave-function for $`r>r_0`$ is $`\psi _0(𝐫)\propto \mathrm{exp}(-\kappa r)`$, where $`\kappa =\sqrt{-2E_0}`$. The right eigenvector for $`H_q(g)`$ is then $`\psi _R(𝐫)\propto \mathrm{exp}(𝐠\cdot 𝐫-\kappa r)`$, obtained via the $`U`$-transformation. This remains normalizable if $`g<\kappa `$, so that the binding energy remains the same as the $`g=0`$ value until $`g=g_c\equiv \kappa `$. The generic form of the spectrum is shown in Fig. 2a.
This indicates a delocalization transition by tuning $`g`$ \- the unzipping or the directional melting of DNA at $`g=g_c`$. The phase diagram is shown schematically in Fig. 2b. There is a gap in the spectrum (Fig. 2a) for $`g<g_c`$, and the gap vanishes continuously as $`\kappa ^2-g^2\sim g_c-g`$ for $`g\to g_c`$. Since the time in the quantum version corresponds to the chain length, the characteristic chain length for the delocalization transition is therefore $$\tau _{\mathrm{dm}}(g)\sim (g_c-g)^{-1}.$$ (6) The spatial length-scale of the localized state is determined by the width of the wave-function and, for $`g=0`$, it is set by $`\kappa ^{-1}`$. For $`g\ne 0`$, the right wave-function, $`\psi _R`$, has a different length scale and this length-scale diverges as the wave-function becomes non-normalizable. The width of $`\psi _R`$ gives this scale as $$\xi _{\mathrm{dm}}(g)\sim (g_c-g)^{-\nu _{\mathrm{dm}}}\mathrm{with}\nu _{\mathrm{dm}}=1,$$ (7) for $`g\to g_c`$. We find $`\tau _{\mathrm{dm}}\sim \xi _{\mathrm{dm}}`$ and therefore $$\zeta =1.$$ (8) The significance of $`\tau _{\mathrm{dm}}`$ can be understood if we study the separation of the two chains, i.e., $`𝐫_\tau `$, at a distance $`\tau `$ along the chain below the pulled end. This can be evaluated by using the standard rules of quantum mechanics. For infinitely long chains in the sub-critical region, only the bound and the first excited states are sufficient for the computation. One finds, along the pulled direction, $$r_\tau \sim \mathrm{exp}[-\tau /\tau _{\mathrm{dm}}(g)],$$ (9) where $`\tau _{\mathrm{dm}}(g)`$ is given by Eq. 6. In other words, $`\tau _{\mathrm{dm}}(g)`$ and $`\xi _{\mathrm{dm}}(g)`$ describe the unzipped part of the two chains near the pulled end (Fig. 1c). These length scales diverge with $`\zeta =1`$ as the critical force is reached from below. (One can also define similar length-scales for $`g\to g_c^+`$, where the lengths would describe the bound regions.) The exponential fall-off, Eq. 9, of the separation from the pulled end immediately gives the picture of the replication Y-fork as shown in Fig. 1c. Let us study the behavior of the free energy. Since the partition function for the Hamiltonian of Eq. 2 obeys a diffusion-like equation, the free energy $`\mathcal{F}=F/k_BT-g^2\tau /2`$ satisfies the equation $$\frac{\partial \mathcal{F}}{\partial \tau }=\frac{1}{2}\nabla ^2\mathcal{F}-\frac{1}{2}(\nabla \mathcal{F})^2-𝐠\cdot \nabla \mathcal{F}-v_0\delta _\mathrm{\Lambda }(𝐫).$$ (10) The left hand side represents the free energy per unit length of the chains. Under a scale transformation $`x\to bx`$ and $`\tau \to b^\zeta \tau `$, the total free energy remains invariant, so that the above equation takes the form $$\frac{\partial \mathcal{F}}{\partial \tau }=\frac{1}{2}b^{\zeta -2}\nabla ^2\mathcal{F}-\frac{1}{2}b^{\zeta -2}(\nabla \mathcal{F})^2-b^{\zeta -1}𝐠\cdot \nabla \mathcal{F}-b^{\zeta -d}v_0\delta _\mathrm{\Lambda }(𝐫).$$ (12) For $`g=0`$, Eq. 12 tells us that $`\zeta =2`$ and $`d=2`$ are special for the melting transition, as we see in Eq. 3. For the choice $`\zeta =1`$, the $`g`$-dependent term dominates and all other terms become irrelevant at large length-scale $`b`$. It is this feature that shows up in the Y-fork of the unzipped chain. The robustness of Eqs. 6-9 also follows from this. With the dependence on the potential strength entering only through $`g_c`$, these are valid along the hatched line of Fig. 2b, and could be oblivious to the details of the nature of the melting transition. In our simple model, the melting point appears as a multi-critical point in the phase diagram of Fig. 2b. Now we see the striking difference between the directional melting and thermal melting (Fig. 1).
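A minimal numerical sketch of the Y-fork profile of Eqs. (6) and (9), in arbitrary units and with an assumed overall amplitude $`r_0`$:

```python
import numpy as np

def fork_profile(tau, g, g_c, r0=1.0):
    """Mean inter-strand separation a contour distance tau below the
    pulled end, Eq. (9): r_tau ~ r0 exp(-tau/tau_dm), with the length
    tau_dm = 1/(g_c - g) from Eq. (6) (g < g_c; arbitrary units)."""
    tau_dm = 1.0 / (g_c - g)
    return r0 * np.exp(-tau / tau_dm)

# The fork penetrates deeper as g -> g_c, since tau_dm diverges
# like (g_c - g)^(-1):
tau = np.linspace(0.0, 50.0, 6)
for g in (0.5, 0.9, 0.99):
    print(g, fork_profile(tau, g=g, g_c=1.0))
```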
In the thermal case the bubbles will have an anisotropic shape, with the spatial extent scaling as the square root of the length along the chain. This is the characteristic of the fixed point $`u_\mathrm{u}^{}`$, while the pulled case is described by a different scaling. The exponential profile of the Y-fork and its scaling are the two important characteristic features which should be experimentally verifiable. We have considered only the equilibrium situation. The dynamics of this process of directional melting or unzipping is important. Similarly, the condition of identical interaction may not be realistic for real DNA. This feature can be cured by taking the interaction $`v_0`$ to be random with a specific distribution. Such a random case shows a different type of melting behavior . The mapping to a quantum problem is also then lost. Self-avoidance and other topological constraints can also be added to this model, though at the cost of the simplicity of the model. For example, self-avoidance can be introduced in Eq. 2 by adding a term $`\frac{1}{2}\int d\tau \int d\tau ^{}v_s\delta (𝐫(\tau )-𝐫(\tau ^{}))`$, with $`v_s>0`$. The generalization to a two-chain problem is straight-forward. Such a term can be “unsquared” by introducing an annealed Gaussian-random potential $`V_\mathrm{I}(𝐫)`$ with zero mean and variance $`\langle V_\mathrm{I}(𝐫)V_\mathrm{I}(𝐫^{})\rangle =v_s\delta (𝐫-𝐫^{})`$, so that the equivalent quantum hamiltonian is $`\mathcal{H}_q=H_q+iV_\mathrm{I}(𝐫)`$, where $`H_q`$ is given by Eq. 4. This involves both an imaginary vector potential and an imaginary, random, scalar potential. The polymer problem is recovered from the $`V_\mathrm{I}`$-averaged propagator of the quantum particle. Such a general non-hermitian hamiltonian is little understood at present even for the pure case, let alone the random one. We wish to come back to these issues in the future. Let us summarize our results and the emerging picture. There is a critical strength of the force required to unzip a double stranded DNA, and this directional melting is a critical phenomenon. Thermal or other fluctuations can open up regions along the length of the chain (bubbles with $`\zeta =2`$) but the unzipped part is characterized by a different scaling ($`\zeta =1`$), as shown in Fig. 1. In other words, the Y-fork created by the force represents a correlated region which is easily distinguishable from a thermal bubble. Living organisms probably work in a slightly sub-critical regime with $`g<g_c`$ to take advantage of and exploit the distinct correlations for the enzymatic actions within the Y-fork. This can be achieved by coupling the biochemical reactions (like polymerization, unwinding etc) to the unzipping phenomenon, analogous, in spirit, to reaction-diffusion systems. The near-critical features of unzipping or directional melting can then lead to a coherent phenomenon we see as replication - a process requiring highly cooperative functioning of different enzymes in space and time. This way of viewing the correlated biochemical events has not been studied so far. As a first step, we therefore suggest that high precision measurements be done to get the profile along the unzipped region of DNA (or simpler double-stranded polymers) under a pulling force, in vitro or in vivo. I thank ICTP for warm hospitality, where a major part of this work was done. Acknowledgments will probably belittle the influence of a discussion with Mathula Thangarajh that shaped the final form of this work.
## 1 Introduction At the present time black holes (BHs) can be created only by the gravitational collapse of compact objects with masses more than about three Solar masses . However, at the early stage of the evolution of the Universe there were no limits on the mass of BHs formed by several mechanisms. The simplest one is a collapse of strongly inhomogeneous regions just after the end of inflation (see e.g. ). Another possible source of BHs could be a collapse of cosmic strings that are produced in early phase transitions with symmetry breaking. The collisions of bubble walls created at phase transitions of the first order can lead to primordial black hole (PBH) formation. We discuss here a new mechanism of PBH production in the collision of two vacuum bubbles. The common opinion that BHs are absent in such processes is based on strict conservation of the original O(2,1) symmetry, whereas there are ways to break it . Firstly, the radiation of scalar waves indicates increasing entropy and hence permanent breaking of the symmetry during the bubble collision. Secondly, the vacuum decay due to thermal fluctuation does not possess this symmetry from the beginning. The simplest example of a theory with bubble creation is a scalar field theory with two non-degenerate vacuum states. Being stable at the classical level, the false vacuum state decays due to quantum effects, leading to the nucleation of bubbles of true vacuum and their subsequent expansion. The potential energy of the false vacuum is converted into the kinetic energy of the bubble walls, thus making them highly relativistic in a short time. The bubble expands till it collides with another one. As was shown in , a black hole may be created in the collision of several bubbles. Our investigations show that a BH can be created as well, with a probability of order unity, in the collision of only two bubbles. This initiates enormous production of BHs, which leads to the essential cosmological consequences discussed below. In Section 2 the evolution of the field configuration in the collisions of bubbles is discussed. The BH mass distribution is obtained in Section 3. In Section 4 cosmological consequences of BH production in bubble collisions at the end of inflation are considered. ## 2 Evolution of field configuration in collisions of vacuum bubbles Consider a theory where the probability of false vacuum decay equals $`\mathrm{\Gamma }`$ and the difference in energy density between the false and true vacuum equals $`\rho _V`$. Initially bubbles are produced at rest; however, the walls of the bubbles quickly increase their velocity up to the speed of light $`v=c=1`$, because the conversion of the false vacuum energy into their kinetic energy is energetically favorable. Let us discuss the dynamics of the collision of two true vacuum bubbles that have been nucleated at points $`(𝐫_1,t_1),(𝐫_2,t_2)`$ and which are expanding into the false vacuum. Following the papers , let us assume for simplicity that the horizon size is much greater than the distance between the bubbles. Just after the collision, mutual penetration of the walls up to a distance comparable with their width is accompanied by a significant potential energy increase . Then the walls reflect and accelerate backwards. The space between them is filled by the field in the false vacuum state, converting the kinetic energy of the walls back into the energy of the false vacuum state and slowing the walls down.
Meanwhile the outer area of the false vacuum is absorbed by the outer wall, which expands and accelerates outwards. Evidently, there is an instant when the central region of the false vacuum is separated. Let us note that this false vacuum bag (FVB) does not possess spherical symmetry at the moment of its separation from the outer walls, but wall tension restores the symmetry during the first oscillation of the FVB. As it was shown in , the further evolution of the FVB consists of several stages: 1) the FVB grows up to a definite size $`D_M`$ until the kinetic energy of its wall becomes zero; 2) after this moment the false vacuum bag begins to shrink down to a minimal size $`D^{}`$; 3) secondary oscillations of the false vacuum bag occur. The process of periodic expansions and contractions leads to energy losses of the FVB in the form of quanta of the scalar field. It has been shown in that only several oscillations take place. On the other hand, an important point is that the secondary oscillations can occur only if the minimal size of the FVB is larger than its gravitational radius, $`D^{}>r_g`$. The opposite case ($`D^{}<r_g`$) leads to BH creation with a mass of about the mass of the FVB. As we will show later, the probability of BH formation is almost unity in a wide range of parameters of theories with first order phase transitions. ## 3 Gravitational collapse of FVB and BH creation Consider in more detail the conditions for converting an FVB into a BH. The mass $`M`$ of the FVB can be calculated in the framework of a specific theory and can be estimated in a coordinate system $`K^{}`$ where the colliding bubbles are nucleated simultaneously. The radius of each bubble $`b^{}`$ in this system equals half of their initial coordinate distance at the first moment of collision. Apparently the maximum size $`D_M`$ of the FVB is of the same order as the size of the bubble, since this is the only parameter of the necessary dimension at such a scale: $`D_M=2b^{}C`$. The parameter $`C\simeq 1`$ is obtained by numerical calculations in the framework of each theory, but its exact numerical value does not significantly affect the conclusions. One can find the mass of the FVB that arises in the collision of two bubbles of radius $`b^{}`$: $$M=\frac{4\pi }{3}\left(Cb^{}\right)^3\rho _V$$ (1) This mass is contained in the shrinking region of false vacuum. Suppose for the estimate that the minimal size of the FVB is of the order of the wall width $`\mathrm{\Delta }`$. A BH is created if the minimal size of the FVB is smaller than its gravitational radius. This means that at least under the condition $$\mathrm{\Delta }<r_g=2GM$$ (2) the FVB can be converted into a BH (where G is the gravitational constant). As an example consider a simple model with Lagrangian $$L=\frac{1}{2}\left(\partial _\mu \mathrm{\Phi }\right)^2-\frac{\lambda }{8}\left(\mathrm{\Phi }^2-\mathrm{\Phi }_0^2\right)^2-ϵ\mathrm{\Phi }_0^3\left(\mathrm{\Phi }+\mathrm{\Phi }_0\right).$$ (3) In the thin wall approximation the width of the bubble wall can be expressed as $`\mathrm{\Delta }=2\left(\sqrt{\lambda }\mathrm{\Phi }_0\right)^{-1}`$. Using (2) one can easily derive that at least an FVB with mass $$M>\frac{1}{\sqrt{\lambda }\mathrm{\Phi }_0G}$$ (4) should be converted into a BH of mass M. The last condition is valid only in the case when the FVB is completely contained within the cosmological horizon, namely $`M_H>1/\sqrt{\lambda }\mathrm{\Phi }_0G`$, where the mass of the cosmological horizon at the moment of the phase transition is given by $`M_H\simeq m_{pl}^3/\mathrm{\Phi }_0^2`$.
Thus for the potential (3), under the condition $`\lambda >(\mathrm{\Phi }_0/m_{pl})^2`$, a BH is formed. This condition is valid for any realistic set of parameters of the theory. The mass and velocity distribution of FVBs, supposing their masses are large enough to satisfy inequality (2), has been found in . This distribution can be written in terms of the dimensionless mass $`\mu \equiv \left(\frac{\pi }{3}\mathrm{\Gamma }\right)^{1/4}\left(\frac{M}{C\rho _v}\right)^{1/3}`$: $$\begin{array}{c}\frac{dP}{\mathrm{\Gamma }^{3/4}V\,dv\,d\mu }=64\pi \left(\frac{\pi }{3}\right)^{1/4}\mu ^3e^{-\mu ^4}\gamma ^3J(\mu ,v),\\ J(\mu ,v)=\int _{\tau _{}}^{\infty }d\tau \,e^{-\tau ^4},\quad \tau _{}=\mu \left[1+\gamma ^2\left(1+v\right)\right].\end{array}$$ (5) The numerical integration of (5) revealed that the distribution is rather narrow. For example, the number of BHs with mass 30 times greater than the average one is suppressed by a factor of $`10^5`$. The average value of the dimensionless mass is equal to $`\mu =0.32`$. This allows one to relate the average mass of a BH to the volume containing the BH at the moment of the phase transition: $$M_{BH}=\frac{C}{4}\mu ^3\rho _vV_{BH}\simeq 0.012\rho _vV_{BH}.$$ (6) ## 4 First order phase transitions in the early Universe Inflation models ended by a first order phase transition hold a dignified position in the modern cosmology of the early Universe (see for example ). The interest in these models is due to the fact that they are able to generate the observed large-scale voids as remnants of the primordial bubbles, for which the characteristic wavelengths are several tens of Mpc . A detailed analysis of a first order phase transition in the context of extended inflation can be found in . Hereafter we will be interested only in the final stage of inflation, when the phase transition is completed. Recall that a first order phase transition is considered completed immediately after the true vacuum percolation regime is established. Such a regime is established approximately when at least one bubble per unit Hubble volume is nucleated. Accurate computation shows that a first order phase transition is successful if the following condition is valid: $$Q\equiv \frac{4\pi }{9}\left(\frac{\mathrm{\Gamma }}{H^4}\right)_{t_{end}}=1.$$ (7) Here $`\mathrm{\Gamma }`$ is the bubble nucleation rate. In the framework of first order inflation models, the filling of all space by true vacuum takes place due to collisions of bubbles nucleated at the final moment of exponential expansion. The collisions between such bubbles occur when they have a comoving spatial dimension less than or equal to the effective Hubble horizon $`H_{end}^{-1}`$ at the transition epoch. If we take $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup> in an $`\mathrm{\Omega }=1`$ Universe, the comoving size of these bubbles is approximately $`10^{-21}h^{-1}`$ Mpc. In the standard approach it is believed that such bubbles are rapidly thermalized without leaving a trace in the distribution of matter and radiation. However, in the previous section it has been shown that for any realistic parameters of the theory, the collision of only two bubbles leads to BH creation with a probability close to 100% . The mass of this BH is given by (see (6)) $$M_{BH}=\gamma _1M_{bub}$$ (8) where $`\gamma _1\simeq 10^{-2}`$ and $`M_{bub}`$ is the mass that could be contained in the bubble volume at the epoch of collision under the condition of full thermalization of the bubbles. The discovered mechanism leads to a new direct possibility of PBH creation at the epoch of reheating in first order inflation models.
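As an illustration of the collapse criterion of Eqs. (1), (2) and (4), here is a minimal sketch in Planck units ($`G=m_{pl}=1`$); all parameter values are invented for illustration.

```python
import numpy as np

G = 1.0  # Planck units: G = m_pl = 1

def fvb_mass(b, rho_v, C=1.0):
    """FVB mass from Eq. (1): M = (4 pi / 3) (C b')^3 rho_V."""
    return 4.0 * np.pi / 3.0 * (C * b)**3 * rho_v

def forms_bh(M, lam, phi0):
    """Eq. (2): collapse to a BH if the wall width
    Delta = 2 / (sqrt(lam) * phi0) is below r_g = 2 G M."""
    delta = 2.0 / (np.sqrt(lam) * phi0)
    return delta < 2.0 * G * M

# Invented illustrative numbers (Planck units):
phi0, lam, rho_v = 1.0e-3, 1.0e-2, 1.0e-12   # rho_v of order phi0^4
b = 1.0e7                                    # bubble radius at collision
M = fvb_mass(b, rho_v)
# Eq. (4) threshold is M > 1/(sqrt(lam) phi0 G) = 1e4 here
print(M, forms_bh(M, lam, phi0))
```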
In the standard picture PBHs are formed in the early Universe if density perturbations are sufficiently large, and the probability of PBH formation from small post-inflation initial perturbations is suppressed exponentially. A completely different situation takes place at the final epoch of first order inflation: namely, collisions between bubbles of Hubble size in the percolation regime lead to PBH formation with masses $$M_0=\gamma _1M_{end}^{hor}=\frac{\gamma _1}{2}\frac{m_{pl}^2}{H_{end}},$$ (9) where $`M_{end}^{hor}`$ is the mass of the Hubble horizon at the end of inflation. According to (6) the initial mass fraction of these PBHs is given by $`\beta _0\simeq \gamma _1/e\simeq 6\times 10^{-3}`$. For example, for the typical value $`H_{end}\simeq 4\times 10^{-6}m_{pl}`$, the initial mass fraction $`\beta `$ is contained in PBHs with mass $`M_0\simeq 1`$ g. In general the Hawking evaporation of mini BHs could give rise to a variety of possible end states. It is generally assumed that evaporation proceeds until the PBH vanishes completely , but there are various arguments against this proposal (see e.g. ). If one supposes that BH evaporation leaves a stable relic, then it is natural to assume that it has a mass of order $`m_{rel}=km_{pl}`$, where $`k\simeq 1÷10^2`$. We can investigate the consequences of PBH formation at the percolation epoch after first order inflation, supposing that a stable relic is the result of its evaporation. As follows from the above consideration, the PBHs are preferentially formed with a typical mass $`M_0`$ at a single time $`t_1`$. Hence the total density $`\rho `$ at this time is $$\rho (t_1)=\rho _\gamma (t_1)+\rho _{PBH}(t_1)=\frac{3(1-\beta _0)}{32\pi t_1^2}m_{pl}^2+\frac{3\beta _0}{32\pi t_1^2}m_{pl}^2$$ (10) The evaporation time scale can be written in the following form $$\tau _{BH}=\frac{M_0^3}{g_{*}m_{pl}^4}$$ (11) where $`g_{*}`$ is the number of effective massless degrees of freedom. Let us derive the density of PBH relics. There are two distinct possibilities to consider. First, the Universe may still be radiation dominated at $`\tau _{BH}`$. This situation holds if the condition $`\rho _{BH}(\tau _{BH})<\rho _\gamma (\tau _{BH})`$ is valid. It is possible to rewrite this condition in terms of the Hubble constant at the end of inflation: $$\frac{H_{end}}{m_{pl}}>\beta _0^{5/2}g_{*}^{-1/2}\simeq 10^{-6}$$ (12) Taking the present radiation density fraction of the Universe to be $`\mathrm{\Omega }_{\gamma _0}=2.5\times 10^{-5}h^{-2}`$ ($`h`$ being the Hubble constant in units of $`100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>), and using the standard values for the present time and the time when the densities of matter and radiation become equal, we find the present-day density fraction of relics $$\mathrm{\Omega }_{rel}\simeq 10^{26}h^{-2}k\left(\frac{H_{end}}{m_{pl}}\right)^{3/2}$$ (13) It is easy to see that relics overclose the Universe ($`\mathrm{\Omega }_{rel}\gg 1`$) for any reasonable $`k`$ and $`H_{end}>10^{-6}m_{pl}`$. The second case takes place if the Universe becomes PBH dominated during the period $`t_1<t_2<\tau _{BH}`$.
This situation is realized under the condition $`\rho _{BH}(t_2)>\rho _\gamma (t_2)`$, which can be rewritten in the form $$\frac{H_{end}}{m_{pl}}<10^{-6}.$$ (14) The present-day relic density fraction takes the form $$\mathrm{\Omega }_{rel}\simeq 10^{28}h^{-2}k\left(\frac{H_{end}}{m_{pl}}\right)^{3/2}$$ (15) Thus the Universe is not overclosed by relics only if the following condition is valid: $$\frac{H_{end}}{m_{pl}}\le 2\times 10^{-19}h^{4/3}k^{-2/3}.$$ (16) This condition implies that the masses of PBHs created at the end of inflation have to be larger than $$M_0\simeq 10^{11}\mathrm{g}\,h^{-4/3}k^{2/3}.$$ (17) On the other hand, there are a number of well-known cosmological and astrophysical limits which prohibit the creation of PBHs in the mass range (17) with an initial fraction of mass density close to $`\beta _0\simeq 10^{-2}`$. So one has to conclude that the effect of the false vacuum bag mechanism of PBH formation makes impossible the coexistence of stable remnants of PBH evaporation with first order phase transitions at the end of inflation. Acknowledgements. This work was partially performed in the framework of Section ”Cosmoparticle physics” of the Russian State Scientific Technological Program ”Astronomy. Fundamental Space Research”, International project Astrodamus, Cosmion-ETHZ and Eurocos-AMS.
# The Lower Mass Limit on the Lightest Supersymmetric Particle, using ALEPH data up to 188.6 GeV ## 1 Introduction The ALEPH detector at LEP collected close to 175 $`\mathrm{pb}^{-1}`$ of data at a centre-of-mass energy of 188.6 GeV. Searches in these data for the decays of Supersymmetric particles have shown no evidence for supersymmetry. These data can be interpreted as excluded regions in the MSSM parameter space. All conventions and notations are consistent with reference , where the searches for charginos and neutralinos using data taken at centre-of-mass energies near 183 GeV are reported. These exclusions extend the previous results using the ALEPH data given in references . The results of the searches are interpreted as exclusions in the MSSM parameter space assuming that R-parity is conserved and the neutralino is the LSP. Sneutrino masses of less than 42 $`\mathrm{GeV}/\mathrm{c}^2`$ are already excluded by limits on $`\mathrm{\Gamma }_Z`$ . ## 2 Search for gauginos The search for gauginos was performed first under the assumption of high slepton mass. The visible topologies and energy in gaugino pair production depend on the decay chain of the gaugino to the LSP and on the mass difference between the gaugino and the LSP. Various topological searches are used, as described in reference . The selections were reoptimised to give the best expected limit (in the absence of a signal) for the higher energy and luminosity. In total 25 chargino and 41 neutralino candidates were observed in at least one of the selections, with the expected backgrounds from standard model processes being 23.0 and 44.3 events respectively. For the case of high sfermion mass the chargino will predominantly decay via a W to the neutralino, and the next-to-lightest neutralino ($`\chi ^{}`$) will decay via a Z. For the topologies sensitive to these cases the number of data events was 10 (4) for the chargino (neutralino) selections, with an expectation of 6.8 (5.3) events from standard model processes. The data sample is consistent with the standard model expectations and so limits on the production cross-sections of charginos and neutralinos are derived. Only in the case of the WW background to the acoplanar lepton search are the expected backgrounds corrected for in the limits. The limits for gaugino production close to the kinematic limit are shown in figures 1 and 2 for the case of heavy sfermions. These limits are shown as excluded contours in the $`\mu `$ vs $`M_2`$ parameter space for $`\mathrm{tan}\beta =\sqrt{2}`$ in figure 3. Using these exclusions, all points in the MSSM parameter space with neutralino masses of less than 32.3 $`\mathrm{GeV}/\mathrm{c}^2`$ are excluded for any $`\mathrm{tan}\beta `$ and $`m_0=500\mathrm{GeV}/\mathrm{c}^2`$. ## 3 Search with low slepton mass The case of low slepton mass is also considered. The chargino and neutralino pair production cross-sections have a dependence on the value of $`m_0`$ due to the s-channel exchange of a $`(Z/\gamma )^{}`$ and t-channel interference terms for the exchange of a $`\stackrel{~}{\nu }_e`$ or $`\stackrel{~}{\mathrm{e}}`$. Also the branching ratio of leptonic decays is enhanced, and invisible modes, for example $`\chi ^{}\to \nu \stackrel{~}{\nu }`$, become kinematically allowed. The search for direct slepton production does however allow additional exclusions, and the LEP 1 limit on the non-standard model contributions to $`\mathrm{\Gamma }_Z`$ also excludes part of the parameter space. The combined exclusion for $`m_0=75\mathrm{GeV}/\mathrm{c}^2`$ is shown in figure 4.
Using the combination of these searches, all points in parameter space with $`M_\chi <32.3\mathrm{GeV}/\mathrm{c}^2`$ are excluded for all $`m_0`$. ## 4 Conclusion The overall limit on the LSP mass in the MSSM for the entire $`\mu `$, $`M_2`$, $`\mathrm{tan}\beta `$ and $`m_0`$ parameter space is $`M_{LSP}>32.3\mathrm{GeV}/\mathrm{c}^2`$. This limit is set at the point of large $`m_0`$, $`\mathrm{tan}\beta =1`$, $`M_2=54.5\mathrm{GeV}/\mathrm{c}^2`$ and $`\mu =68.3\mathrm{GeV}/\mathrm{c}^2`$; here the lightest Higgs mass is above 96 $`\mathrm{GeV}/\mathrm{c}^2`$ for large $`m_A`$ and stop mixing. The sensitivity of the ALEPH Higgs boson searches at 189 GeV does not allow the exclusion of this point.
# Tomography of the low excitation planetary nebula NGC 40 ## 1 Introduction NGC 40 (PNG120.0+09.8, Acker et al., 1992; Figure 1) is a very low excitation planetary nebula powered by a WC8 star presenting a mass-loss rate of the order of 10<sup>-6</sup> \- 10<sup>-8</sup> Myr<sup>-1</sup> with wind velocities between 1800 and 2370 km s<sup>-1</sup> (Cerruti-Sola and Perinotto, 1985; Bianchi, 1992). The nebula is described by Curtis (1918) as a truncated ring, from the end of which extend much fainter wisps; the brighter central portion is 38x35$`\mathrm{}`$ in PA=14<sup>0</sup>, while the total length along this axis is about 60″. Deep, narrow-band imagery by Balick et al. (1992) reveals the presence of an external network of knotty floccules and smooth filaments. Meaburn et al. (1996) identify two haloes around the barrel-shaped core, and a jet-like feature projecting from it. Detailed low resolution spectroscopy (Aller and Czyzak, 1979, 1983; Clegg et al., 1983) indicates point to point changes of the excitation and chemical abundances which are typical of planetary nebulae, without apparent contamination by the fast stellar wind. A kinematical analysis secured by Sabbadin and Hamzaoglu (1982) suggests the presence of peculiar motions in the central part of NGC 40, with the expansion velocity larger in \[OIII\] than in H$`\alpha `$ and \[NII\]. More recently, Meaburn et al. (1996) obtained long-slit H$`\alpha `$ and \[NII\] high resolution spectra of the bright core, of the two haloes and of the jet-like feature. These authors showed that the outer filamentary halo is practically inert and that the motion of the inner, diffuse, spherical halo mimics the one observed over the bright nebular core; moreover, the jet-like structure is kinematically associated with the receding end of the barrel shaped-core and does not present any of the characteristics expected of a true jet. In order to study in detail the nebular physical conditions and to apply an original procedure giving the spatial matter distribution along the cross-section covered by the spectroscopic slit, on December 1998 we obtained a number of echellograms at different position angles of a dozen of winter planetary nebulae (NGC 40, 650-1, 1501, 1535, 2022, 2371-2, 2392, 7354 and 7662, J 320 and 900, A 12 and M 1-7). In this paper we report the results derived for NGC 40. In Section 2 we present the observational material; in Section 3 we discuss the expansion velocity field; in Section 4, the electron temperature and the electron density are analysed and a new method to determine the radial distribution of the electron density in expanding nebulae is presented. The radial ionization structure of NGC 40 is given in Section 5 and the resulting chemical composition gradients in Section 6. Section 7 shows the H<sup>+</sup>, O<sup>++</sup> and N<sup>+</sup> tomographic maps derived at four position angles; a short discussion is contained in Section 8 and conclusions are drawn in Section 9. ## 2 Observational material Broad-band U, V and R imagery of NGC 40 was obtained with the Optical Imager Galileo (OIG) camera mounted on the Nasmyth A adapter interface (scale=5.36 $`\mathrm{}`$ mm<sup>-1</sup>) of the 3.5m Italian National Telescope (TNG, Roque de los Muchachos, La Palma, Canary Islands). The OIG camera was equipped with a mosaic of two thinned and back-illuminated EEV42-80 CCDs with 2048x4096 pixels each (pixel size=13.5 microns; pixel scale= 0.072 $`\mathrm{}`$ pix<sup>-1</sup>). 
Though these images were taken during the testing period of the instrument (when interference filters were unavailable), they reveal new morphological details of the nebula supporting our spectroscopic results and will be presented later on (Section 5). Spatially resolved, long-slit spectra of the bright core of NGC 40 in the range $`\lambda `$$`\lambda `$4500-8000 Å (+flat fields+Th-Ar calibrations+comparison star spectra) were obtained with the Echelle spectrograph (Sabbadin and Hamzaoglu, 1981, 1982) attached to the Cassegrain focus of the 182cm Asiago telescope, combined with a Thompson 1024x1024 pixel CCD. Four position angles were selected: 20<sup>0</sup> (apparent major axis), 110<sup>0</sup> (apparent minor axis), 65<sup>0</sup> and 155<sup>0</sup> (intermediate positions). In all cases the slit-width was 0.200 mm (2.5″ on the sky), corresponding to a spectral resolution of 13.5 km s<sup>-1</sup> (1.5 pixel). During the exposures the slit grazed the bright central star (we tried to avoid most of the continuum contamination; the faint stellar spectrum present in our echellograms was used as reference for the nebular centre and to correct the observations for seeing and guiding errors). All spectra were calibrated in wavelength and flux in the standard way using the IRAF data reduction software. The following nebular emissions were detected: H$`\beta `$, $`\lambda `$4959 Å and $`\lambda `$5007 Å of \[OIII\], $`\lambda `$5755 Å of \[NII\], $`\lambda `$5876 Å of HeI, $`\lambda `$6300 Å and $`\lambda `$6363 Å of \[OI\] (the first at the order edge; both partially blended with night-sky lines), $`\lambda `$6548 Å and $`\lambda `$6584 Å of \[NII\], H$`\alpha `$, $`\lambda `$6578 Å of CII, $`\lambda `$6717 Å and $`\lambda `$6731 Å of \[SII\], $`\lambda `$7135 Å of \[ArIII\], $`\lambda `$7320 Å and $`\lambda `$7330 Å of \[OII\] (at the extreme edge of the order; out of focus) and $`\lambda `$3726 Å and $`\lambda `$3729 Å of \[OII\] (second order). All line intensities (but not the second order doublet of \[OII\]) were de-reddened by fitting the observed H$`\alpha `$/H$`\beta `$ ratio to the one computed by Brocklehurst (1971) for Te=$`10^4`$ K and Ne=$`10^4`$ cm<sup>-3</sup>. We derive a logarithmic extinction at H$`\beta `$, c(H$`\beta `$)=$`0.60\pm 0.10`$, to be compared with the values of 0.28, 0.65 and 0.70 obtained by Kaler (1976), Aller and Czyzak (1979) and Clegg et al. (1983), respectively. The fair agreement (within 10%) between our integrated line intensities and those reported in the literature from low resolution spectra (see, for example, Clegg et al., 1983) led us to adopt I(3726+3729)/I(H$`\alpha `$)=1.48, as reported by these authors. A large variety of emission structures is present in NGC 40; for illustrative purposes, some examples are given in Figure 2. In these reproductions the faint stellar continuum was removed and the observed intensities enhanced (by the factor given in parentheses) to make each line comparable with H$`\alpha `$. Our spectra at PA=110<sup>0</sup> (apparent minor axis, Fig. 2a) were centred 2″ South of the central star. All emissions present the classical bowed shape expected for expanding shells of different mean radii $`r`$ and thicknesses $`t`$, ranging from $`r=16^{\prime \prime }`$ for \[OIII\] and CII to $`r=19^{\prime \prime }`$ for \[NII\] and \[SII\] and from $`t=0.20r`$ for \[NII\] and \[SII\] to $`t=0.25r`$ for \[OIII\] and CII (as also suggested by the integral intensity distributions along the slit).
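As an aside on the extinction correction just described: the logarithmic extinction follows directly from the observed Balmer decrement. A minimal sketch in Python, assuming a theoretical H$`\alpha `$/H$`\beta `$ ratio of 2.85 (close to the Brocklehurst value for the adopted Te and Ne) and a standard Galactic reddening-curve coefficient f(H$`\alpha `$)=-0.32 relative to H$`\beta `$; both numbers are our assumptions, not values taken from this paper:

```python
import math

def c_hbeta(obs_ratio, theory_ratio=2.85, f_halpha=-0.32):
    """Logarithmic extinction at H-beta from the Balmer decrement.

    I_obs(Ha)/I_obs(Hb) = theory_ratio * 10**(-c * f_halpha),
    so c = log10(obs_ratio / theory_ratio) / (-f_halpha).
    """
    return math.log10(obs_ratio / theory_ratio) / (-f_halpha)

# An observed Ha/Hb of ~4.4 reproduces the c(Hbeta) ~ 0.60 derived in the text.
print(round(c_hbeta(4.43), 2))
```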
At PA=20<sup>0</sup> (apparent major axis; slit centre 2″ West of the star; Fig. 2b) low excitation lines (such as the \[SII\] red doublet) show knotty structures and fail to close at either polar cap. High excitation emissions (for instance \[OIII\]) appear more homogeneous; they merge in a diffuse zone internal to the polar caps; the smooth intensity distribution and the large velocity spread suggest that \[OIII\] completely fills this part of the nebula; moreover, the faintness (or absence) of an external low excitation counterpart indicates that in these directions the main body of NGC 40 is density bounded rather than ionization bounded. Spectra taken at PA=65<sup>0</sup> (Fig. 2c) were centred 2″ S-E of the central star; they appear quite regular, but for a deformation of the blue-shifted component in the E-NE quadrant. At PA=155<sup>0</sup> (centre of the slit 2″ S-W of the star; Fig. 2d) low excitation lines (such as the \[NII\] line at $`\lambda `$6584 Å) merge in the N-NW sector, coinciding with a bright knot. The S-SE region is strongly depleted of gas and only high excitation emissions (like the extremely faint CII line at $`\lambda `$6578 Å) present a closed structure; as already seen for PA=20<sup>0</sup>, the weakness of external low excitation emissions suggests that in these directions the nebula is density bounded. ## 3 The expansion velocity field First of all we measured the peak separation at the centre of each nebular emission. Results are presented in Table 1, where ions are put in order of increasing ionization potential. We find a complete overturn of the Wilson law: in NGC 40 the higher the ionization, the faster the motion (thus confirming the suggestion by Sabbadin and Hamzaoglu, 1982, of a peculiar \[OIII\] expansion velocity). A further peculiarity is represented by \[OII\], which expands faster than the other low ionization species. This \[OII\] velocity-excess is related to the very low degree of excitation of the nebula: since the O<sup>+</sup> zone stops at 35.1 eV (while, for example, S<sup>+</sup> falls at 23.4 eV and N<sup>+</sup> at 29.6 eV), it extends well inwards, as also indicated by the \[OIII\] weakness. So, in NGC 40, \[OII\] must be considered a medium-high excitation species (at least from the dynamical point of view). Clearly, the expansion velocities reported in Table 1 are only mean values and do not describe the complex kinematics of the nebula. To better illustrate this, we have selected two position angles (110<sup>0</sup>=apparent minor axis; 20<sup>0</sup>=apparent major axis) and chosen \[NII\] for low excitation regions and \[OIII\] for the high excitation ones (“high” at least for NGC 40!). The regular ellipses shown by these ions at PA=110<sup>0</sup> (Figure 3, left panel) are typical of expanding shells; the absence of tilt indicates that the nebular cross-sections are either circular or elliptical (in this second case we are aligned with one of the axes). In fact, the de-projection of the observed expansion velocities (assuming a simple, direct position-speed correlation) gives the “true” expansion velocities presented in the right panel of Figure 3, i.e. a constant value for each ion. Similar trends are obtained also at PA=65<sup>0</sup> and in the N-NW sector of PA=155<sup>0</sup>. The \[NII\] and \[OIII\] expansion velocity fields observed at PA=20<sup>0</sup> (major axis) are given in Figure 4 (left panel). Here the situation is more complicated, mainly with respect to \[OIII\].
In fact, while \[NII\] presents sharp emissions along the slit, \[OIII\] describes an ellipse which is well defined in the main part of the nebula, but becomes smooth and broad to the north and south. The de-projection of the observed velocities (right panel of Figure 4) points out the acceleration suffered by the polar (less dense) nebular material. A similar behavior occurs at PA=155<sup>0</sup>, in the S-SE sector. For a better understanding of the peculiar kinematics of NGC 40, a detailed analysis of the nebular physical conditions can be useful. ## 4 Electron temperature (Te) and electron density (Ne) The only $`Te`$ diagnostic present in our spectra is the line intensity ratio 6584/5755 Å of \[NII\]. Unfortunately, the auroral emission is too weak for a detailed analysis; for the brightest parts of NGC 40 we obtain I(6584)/I(5755)=125$`\pm `$20, corresponding to $`Te`$=7900$`\pm `$200 K (assuming Ne=10<sup>4</sup> cm<sup>-3</sup>). Thus, according to Aller and Czyzak (1979, 1983) and Clegg et al. (1983), we will adopt $`Te`$=8000 K for low ionization regions (O<sup>0</sup>, N<sup>+</sup>, S<sup>+</sup>, O<sup>+</sup>) and $`Te`$=10000 K for the others (O<sup>++</sup>, He<sup>+</sup>, C<sup>+</sup>, Ar<sup>++</sup>). Our $`Ne`$ study is essentially based on the \[SII\] lines at $`\lambda `$6717 Å and $`\lambda `$6731 Å. To test the reliability of our measurements we can make a comparison with the results of Clegg et al. (1983), who observed at low spectral resolution a patch of nebulosity located north-west of the central star. At “position a” they found I(6717)/I(6731)=0.68, corresponding to $`Ne`$=2200 cm<sup>-3</sup> (for $`Te`$=8000 K). “Position a” practically coincides with the bright western edge of our spectra taken at PA=110<sup>0</sup> (apparent minor axis). The \[SII\] density distribution we observed at this position angle is shown in Figure 5 for the blue-shifted and the red-shifted peaks, and also for the integrated, low resolution spectrum (we have averaged our echellograms over a velocity range of 312 km s<sup>-1</sup>, which corresponds to the spectral resolution used by Clegg et al., 1983). The estimated $`Ne`$ accuracy varies from $`\pm `$15% for high density peaks (generally, but not always, coinciding with the strongest emissions), to $`\pm `$50% for deep valleys (weakest components). For “position a” the integrated spectrum gives $`Ne=`$2100 cm<sup>-3</sup>, in good agreement with Clegg et al. (1983), but high resolution measurements indicate a density of 3100 cm<sup>-3</sup>. Note that in Figure 5 the maximum density of the integrated spectrum (3000 cm<sup>-3</sup>) is reached in a relatively weak knot projected onto the central star; it corresponds to a peak (3200 cm<sup>-3</sup>) in the blue-shifted component. As a further example, in Figure 6 we present the \[SII\] density distribution observed at PA=155<sup>0</sup>; symbols are as in Figure 5. In this case our N-NW peak is partially superimposed on “position b” of Clegg et al. (1983). Once again their electron density ($`Ne`$=2100 cm<sup>-3</sup>) coincides with our integrated value, but is considerably lower than the peak measured at high resolution ($`Ne`$=2900 cm<sup>-3</sup>). CAVEAT: strictly speaking, our $`Ne`$ estimates must also be considered as lower limits, due to ionization effects.
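Before moving on, we note that the \[SII\] ratio-to-density conversion used throughout this section can today be reproduced with standard tools. A minimal sketch, assuming the PyNeb package and its Atom.getTemDen interface (the package and the call signature are our assumption; this paper, which predates PyNeb, did not use it):

```python
import pyneb as pn  # assumed available: pip install PyNeb

S2 = pn.Atom('S', 2)  # five-level S+ ion

# Clegg et al.'s "position a": I(6717)/I(6731) = 0.68 at an adopted Te = 8000 K
ne = S2.getTemDen(0.68, tem=8000.0, wave1=6716, wave2=6731)
print(f"Ne ~ {ne:.0f} cm^-3")  # ~2e3 cm^-3, matching the value quoted above
```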
In fact, as we will see in the next section, the complete recombination of S<sup>++</sup> occurs externally to the top of the density distribution; this implies that the peak of the \[SII\] intensity distribution is slightly outwards with respect to the density maximum, leading to a small underestimate of Ne derived from the \[SII\] line intensity ratio. Besides the density peak along the cross-section of the nebula, our high resolution spectra allow us to derive the radial distribution of the electron density using the “zero-velocity column” of each emission line; this is the central, rest-frame pixel column containing the ionized gas moving in the neighborhood of -30 km s<sup>-1</sup> (the mean radial velocity of the whole nebula). In practice, the zero-velocity column contains the material which is expanding normally to the line of sight; it is important because it isolates a definite slice of the nebula which is unaffected by the expansion velocity field. The intensity distribution observed in the zero-velocity column of each nebular emission must be cleaned of the broadening effects due to:
- the instrumental profile, corresponding to a Gaussian having FWHM=13.5 km s<sup>-1</sup>;
- thermal motions; for $`Te`$=8000 K they amount to 19 km s<sup>-1</sup> for H, 9.5 km s<sup>-1</sup> for He and 5 km s<sup>-1</sup> (or less) for the heavier elements;
- turbulent motions, which affect all lines in the same way; following Meaburn et al. (1996), in NGC 40 they are 6 - 8 km s<sup>-1</sup>;
- the fine structure of H$`\alpha `$ and $`\lambda `$5876 Å of HeI. Following Dyson and Meaburn (1971) and Dopita (1972), the seven H$`\alpha `$ components can be shaped as the sum of two equal Gaussian profiles separated by 0.14 Å. $`\lambda `$5876 Å of HeI has six components: five of them are concentrated in the narrow range $`\lambda `$$`\lambda `$5875.5987 - 5875.6403 Å (they were considered a single emission at $`\lambda `$5875.62 Å); the sixth one, located at $`\lambda `$5875.9663 Å, is 4.4 times weaker (and was therefore neglected).

We have derived the corrected intensity of the zero-velocity columns by deconvolving the observed profile at any given nebular position along the slit into a series of Gaussians spaced by one pixel along the dispersion and having: FWHM=25.8 km s<sup>-1</sup> for H$`\alpha `$, 17.9 km s<sup>-1</sup> for $`\lambda `$5876 $`\AA `$ of HeI and FWHM=16.0 km s<sup>-1</sup> for the other lines. We have cautiously stopped the computation when I<sub>obs</sub>$`\ge `$2I<sub>corr</sub>, corresponding to an uncertainty in $`Ne`$ of $`\pm `$30%. The last step is to deconvolve the corrected zero-velocity columns for guiding errors and seeing by means of the profile, along the slit, of the stellar continuum present in the spectrum. At each position of the zero-velocity column the intensity is proportional to $`NeNi`$ and, in the case of total ionization and Te=constant, I $`\propto `$ $`Ne^2`$. This is valid, separately, for all ions. As an example, the H$`\alpha `$ and $`\lambda `$6584 Å of \[NII\] zero-velocity columns derived at PA=110<sup>0</sup> (apparent minor axis) are presented in Figure 7; densities are normalized to the strongest peak (left ordinate scale). As expected, $`Ne`$(H<sup>+</sup>) and $`Ne`$(N<sup>+</sup>) practically coincide in the external parts (up to the maxima), and tend to diverge moving inwards, due to the gradual double ionization of nitrogen.
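The broadening terms listed above combine, for independent Gaussian components, in quadrature, which roughly accounts for the working FWHM values adopted for the deconvolution. A small sketch; treating the H$`\alpha `$ fine structure as a single extra effective Gaussian term is our simplification:

```python
import math

def total_fwhm(*components):
    """Quadrature sum of independent Gaussian broadening terms (FWHM, km/s)."""
    return math.sqrt(sum(c * c for c in components))

instrumental = 13.5   # spectral resolution
thermal_h    = 19.0   # hydrogen at Te = 8000 K
turbulence   = 7.0    # 6-8 km/s after Meaburn et al. (1996)
fine_struct  = 6.4    # ~0.14 A splitting of Halpha at 6563 A

print(total_fwhm(instrumental, thermal_h, turbulence))               # ~24.3 km/s
print(total_fwhm(instrumental, thermal_h, turbulence, fine_struct))  # ~25.2 km/s, near the 25.8 adopted for Halpha
```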
Let’s consider the H<sup>+</sup> curve; if we scale the western peak to $`Ne`$=3100 cm<sup>-3</sup>, as previously derived from the \[SII\] line intensity ratio (see Fig. 5), we obtain the true density distribution along the central, zero-velocity column (right ordinate scale). Note that the density of the eastern peak turns out to be 2600 cm<sup>-3</sup>, in fairly good agreement with the value derived from the \[SII\] diagnostics. In the same way we obtained the radial density trends at the other position angles; they are given in Figure 8. Note that the profiles observed at PA=20<sup>0</sup> correspond to the northern ($`Ne`$=1000 cm<sup>-3</sup>) and southern ($`Ne`$=600 cm<sup>-3</sup>) polar caps, and that only a weak emission ($`Ne`$=400 cm<sup>-3</sup>) is visible in the S-SE sector of PA=155<sup>0</sup>. Overall, the “bell” profile seems to be a general characteristic of the radial matter distribution within NGC 40. A final note on the $`Ne`$ problem in this nebula concerns \[OII\]. We have repeated the previous procedures using the 3726/3729 diagnostic ratio; the \[OII\] density trends are, in all cases, very close to the \[SII\] ones, but the actual density is systematically lower by a factor of almost two. To explain this curious feature, we must examine in detail the ionization structure of NGC 40. ## 5 Radial ionization structure The detailed radial ionization structure of an expanding nebula can be derived by comparing the zero-velocity columns of different lines. To obtain the ionic abundances (relative to H<sup>+</sup>) of O<sup>0</sup>, O<sup>+</sup>, O<sup>++</sup>, N<sup>+</sup>, C<sup>+</sup>, S<sup>+</sup>, He<sup>+</sup> and Ar<sup>++</sup> we have applied the classical procedure (see, for instance, Peimbert and Torres-Peimbert, 1971; Barker, 1978; Aller and Czyzak, 1979, 1983). For all ions except \[ArIII\], we have used the same atomic constants adopted by Clegg et al. (1983) in their low resolution study of NGC 40; for \[ArIII\], not mentioned by these authors, transition probabilities were taken from Mendoza (1983) and collision strengths from Galavis et al. (1995). Since the H<sup>+</sup> central column is practically absent in the S-SE sector of PA=155<sup>0</sup> (while other ions are present), only lower limits to the relative ionic abundances can be obtained; the same situation occurs at PA=20<sup>0</sup> (major axis), internally to the polar caps. In all the remaining cases (where the radial ionization structure of the nebula can be studied in detail) most ions present the expected trends: medium-high ionization species (O<sup>++</sup>, Ar<sup>++</sup>, He<sup>+</sup>) rapidly decrease outwards; the opposite holds for low ionization species, such as O<sup>0</sup>, N<sup>+</sup> and S<sup>+</sup>. Once again, O<sup>+</sup> shows an anomalous behavior (C<sup>+</sup> also has similar characteristics, but the CII $`\lambda `$6578 Å line is so weak that a quantitative analysis is hazardous). The radial ionization structure of NGC 40 is synthesized in Figure 9 for PA=155<sup>0</sup>; this position angle is a perfect compendium of the different situations occurring in the nebula, since it presents a well defined H<sup>+</sup> structure in the N-NW part and practically no hydrogen emission at S-SE (see Figure 8). Moreover, its N-NW peak corresponds to “position b” of Clegg et al. (1983), so that a direct comparison with their results can be made.
Let’s first consider the N-NW sector of Figure 9; here both O<sup>+</sup>/H<sup>+</sup> and O<sup>++</sup>/H<sup>+</sup> increase inwards (while we would expect opposite trends, since the O<sup>+</sup>$`\leftrightarrow `$O<sup>++</sup> equilibrium shifts to the left going outwards, so that when \[OII\] increases \[OIII\] falls, and vice versa). Note also that the O<sup>++</sup> curve lies well below the O<sup>+</sup> one, indicating that the stellar flux is unable to completely ionize O<sup>+</sup>. Ionization structures similar (qualitatively and quantitatively) to the one observed in the N-NW sector of PA=155<sup>0</sup> were obtained at PA=65<sup>0</sup> and 110<sup>0</sup>. Look now at the left part of Figure 9. In the S-SE sector of PA=155<sup>0</sup> the H<sup>+</sup> zero-velocity column shows only a weak emission at about 30″ from the center (and here the N<sup>+</sup>/H<sup>+</sup> trend was derived), whereas high ionization species are present also at intermediate positions, giving the lower limits to the He<sup>+</sup>/H<sup>+</sup>, C<sup>+</sup>/H<sup>+</sup>, O<sup>+</sup>/H<sup>+</sup> and O<sup>++</sup>/H<sup>+</sup> abundances indicated in the figure. The degree of excitation in these hydrogen-depleted zones remains low, being N(O<sup>+</sup>)/N(O<sup>++</sup>)$`\simeq `$5. Similar low density, hydrogen-depleted regions are present at PA=20<sup>0</sup> (major axis), internally to the polar caps. An independent confirmation of the existence of a peculiar ionization structure in NGC 40 comes from the broad-band U, V and R imagery obtained with the 3.5m Italian National Telescope (TNG). The R band (Figure 10, upper left) is dominated by H$`\alpha `$ and \[NII\], with a small contamination by \[SII\] at $`\lambda `$6717 Å and $`\lambda `$6731 Å. Over 90% of the U light (Figure 10, upper right) comes from the \[OII\] doublet at $`\lambda `$3726 Å and $`\lambda `$3729 Å; the remainder is due to high Balmer lines. Finally, in the V band (Figure 10, lower left) H$`\beta `$ contributes 55% and the \[OIII\] doublet at $`\lambda `$4959 Å and $`\lambda `$5007 Å the remaining 45%. Since the H$`\beta `$ morphology of this low excitation planetary nebula is very similar to the H$`\alpha `$+\[NII\] one, we can extract the \[OIII\] component of the V image by subtracting the R image (appropriately scaled in intensity) from the V image. The resulting \[OIII\] appearance of NGC 40, shown in Figure 10, lower right, noticeably differs from the H$`\alpha `$+\[NII\] one; this morphological peculiarity, known since Louise (1981), stands out in great detail (greater than before; see, for instance, Balick, 1987 and Meaburn et al., 1996). The apparent \[OII\]/\[OIII\] distribution over the nebula can be obtained from the ratio U/\[OIII\], and the (H$`\alpha `$+\[NII\])/\[OII\] distribution from R/U; results are shown in Figure 11. The large stratification effects present in these last maps indicate that all three intensity ratios \[OII\]/\[OIII\], (H$`\alpha `$+\[NII\])/\[OII\] and (H$`\alpha `$+\[NII\])/\[OIII\] decrease inwards, i.e. all three ionic ratios O<sup>+</sup>/O<sup>++</sup>, (H<sup>+</sup>+N<sup>+</sup>)/O<sup>+</sup> and (H<sup>+</sup>+N<sup>+</sup>)/O<sup>++</sup> decrease inwards. In conclusion: all three ionic ratios O<sup>++</sup>/O<sup>+</sup>, O<sup>+</sup>/H<sup>+</sup> and O<sup>++</sup>/H<sup>+</sup> increase inwards (the H<sup>+</sup> distribution being very similar to the N<sup>+</sup> one).
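The broad-band decomposition just described is, in essence, scaled image arithmetic. A minimal numpy sketch; the function name and the scale factor k are illustrative (k would be tuned until the nebular hydrogen features cancel), not the actual reduction code:

```python
import numpy as np

def extract_oiii(v_image, r_image, k):
    """Isolate the [OIII] component of a V frame.

    V ~ 0.55*Hbeta + 0.45*[OIII] and R ~ Halpha + [NII]; since the Hbeta
    morphology tracks Halpha + [NII], subtracting a scaled R frame removes
    the hydrogen contribution, leaving (mostly) [OIII].
    """
    return np.clip(v_image - k * r_image, 0.0, None)
```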
This imaging result is in perfect qualitative agreement with the results obtained from high resolution spectroscopy. The obvious conclusions emerging from the peculiarities in the ionization structure of NGC 40 are drawn in the following, short section. ## 6 Abundance gradients towards NGC 40 We believe that the radial ionization structure of NGC 40 indicates the presence of abundance gradients across the nebula (although we cannot exclude a minor effect due to electron temperature variations). Take oxygen, for instance. The general expression O=O<sup>0</sup>+O<sup>+</sup>+O<sup>++</sup>+etc. in the case of NGC 40 reduces to O=O<sup>+</sup>, i.e. the observed O<sup>+</sup>/H<sup>+</sup> ratio is a good measure of the total oxygen abundance. We obtain O/H=6$`\pm `$1 x10<sup>-4</sup> in the main, denser regions (to be compared with the values of 6.0x10<sup>-4</sup> and 8.4x10<sup>-4</sup> given by Aller and Czyzak, 1983, and Clegg et al., 1983, respectively). O/H rapidly increases inwards; at the observational limit it is larger than 5x10<sup>-3</sup>. Less accurate abundance gradients can be derived in a similar manner for C/H (1x10<sup>-3</sup> to $`>`$3x10<sup>-2</sup>) and for He/H (4x10<sup>-2</sup> to $`>`$1x10<sup>-1</sup>). It is very tempting to attribute these abundance trends to contamination by the stellar wind (we recall that detailed analyses of the optical spectrum of the WC8 nucleus carried out by Leuenhagen et al., 1996, give an upper limit for the mass fraction of hydrogen of 2%; moreover, He/C=0.8 and C/O=5). In this scenario, our internal, low density, hydrogen-depleted regions consist essentially of enriched material recently emitted by the central star. Moving outwards, this gas gradually mixes with the shell’s gas and the abundances rapidly fall to the nebular values. The general expression I $`\propto `$ $`NeNi`$ (valid in the zero-velocity column) allows us to estimate the electron density in the innermost, fast moving, hydrogen-depleted regions. An indicative value of 1 - 2 cm<sup>-3</sup> is obtained, corresponding to a few 10<sup>-24</sup> g cm<sup>-3</sup>, and (for a distance of 1100 pc) to a matter flux of a few 10<sup>-8</sup> M yr<sup>-1</sup>. This must be the order of magnitude of the present mass-loss rate of the central star. ## 7 Tomography In order to obtain the spatial matter distribution within a planetary nebula we analyse the observed emission line structures by means of an iterative procedure which implements the method originally proposed by Sabbadin et al. (1985, 1987 and references therein) for plate echellograms. In this approach, the relative density distribution of an emitting region along the cross-section of the nebula covered by the slit is obtainable from the radial velocity, FWHM and intensity profile. The calibration to absolute densities is straightforward if the \[SII\] or \[OII\] doublets are present (as in the case of NGC 40); for density bounded planetary nebulae other diagnostics (such as 5517/5537 of \[ClIII\] or 4711/4740 of \[ArIV\]) can be used but, due to the intrinsic weakness of these lines, results are often uncertain; so, for high excitation planetary nebulae, we prefer to adopt the surface brightness method (it will be described in a forthcoming paper dedicated to NGC 1501). The tomographic analysis contains a large amount of physical information; partly for reasons of space, this first application to NGC 40 is intended as a concise test of reliability (i.e.
a semi-quantitative comparison with the observational results independently accumulated in the previous sections). Amongst the different maps (intensity, density, fractional ionization, ionic abundance etc.) obtainable with our reconstruction method, we have selected $`\sqrt{NeNi}`$; H<sup>+</sup> was chosen as ionization marker, and N<sup>+</sup> and O<sup>++</sup> to symbolize low and high excitation zones, respectively. Since for H<sup>+</sup> $`\sqrt{NeNi}`$ is proportional to Ne, the H<sup>+</sup> maps give the relative density distributions along the cross-sections of NGC 40 covered by the slit; this is roughly true also for N<sup>+</sup> (its ionization occurring only in the innermost parts), but not for O<sup>++</sup>, due to the incomplete O<sup>+</sup> ionization. The gray-scale $`\sqrt{NeNi}`$ maps derived for H<sup>+</sup>, O<sup>++</sup> and N<sup>+</sup> at PA=110<sup>o</sup>, 20<sup>o</sup>, 65<sup>o</sup> and 155<sup>o</sup> are shown in Figure 12. At PA=110<sup>o</sup> (apparent minor axis) H<sup>+</sup> describes a circular, knotty ring; N<sup>+</sup> closely mimics H<sup>+</sup> (but, obviously, it is a bit sharper), while O<sup>++</sup> is emitted in the internal regions and rapidly drops when $`Ne`$ rises. Note the opposite behaviors of H<sup>+</sup> and O<sup>++</sup>, in the sense that bright H<sup>+</sup> zones tend to be associated with faint O<sup>++</sup> regions, and vice versa; the patch studied by Clegg et al. (1983, their “position a”) corresponds to our western edge. At PA=20<sup>o</sup> (apparent major axis) the H<sup>+</sup> and N<sup>+</sup> ring structures appear disrupted at north and south, where the two polar caps are formed; unfortunately, in these reproductions we have lost the two extremely faint (and fast moving) low excitation mustaches attached to the main nebular body (blue-shifted at north, red-shifted at south). The elliptical structure shown at this position angle by O<sup>++</sup> testifies to the existence of hydrogen-depleted zones internally to the polar caps (which, in our opinion, consist essentially of stellar wind). The $`Ne`$ map derived from H<sup>+</sup> at PA=65<sup>o</sup> describes a distorted ring presenting a deformation in the approaching gas of the E-NE quadrant; the bright knots visible in both the H<sup>+</sup> and N<sup>+</sup> maps completely disappear in the O<sup>++</sup> one, which presents a more homogeneous distribution. Finally, at PA=155<sup>o</sup> the ring structure shown by H<sup>+</sup> and N<sup>+</sup> in the N-NW sector is completely absent at S-SE; once again, O<sup>++</sup> indicates the presence of hydrogen-depleted gas within the S-SE region; the N-NW border corresponds to “position b” of Clegg et al. (1983). Quantitative maps, giving the true electron density distribution at the four position angles, can be obtained by calibrating the H<sup>+</sup> maps of Fig. 12 with the \[SII\] line intensity ratio. Results are shown in Figure 13, where the nebular centre corresponds to the position of the exciting star (which actually lies 2″ above or below the plane of the figure). ## 8 Discussion The general rule given by Olin Wilson (1950) in his classical work on the expansion velocity of planetary nebulae (“high-excitation particles show smaller separation, and low-excitation particles higher separation”) is definitely violated in NGC 40, which presents a positive inwards velocity gradient.
Moreover, this velocity gradient is small enough ($`\mathrm{\Delta }V\simeq 3-4`$ km s<sup>-1</sup> for the main shell) to rouse the suspicion that some nebular lines can be produced by resonance scattering. The problem has been analysed in detail by Clegg et al. (1983), who tried to explain the abnormal CIV $`\lambda `$1549 Å intensity observed in NGC 40 in terms of stellar light scattered by the nebula. Assuming a spread in velocity of 20 km s<sup>-1</sup>, these authors obtained that the stellar flux at $`\lambda `$1549 Å is nine times larger than the $`\lambda `$1549 Å nebular scattered flux and concluded that scattering is inadequate, but pointed out that “this mechanism will be most effective if velocity gradients are small”. Decidedly, our new $`\mathrm{\Delta }V`$ value re-opens the question. We have seen that the peculiarities found in the ionization structure strongly suggest the presence of abundance gradients within NGC 40; the agent responsible for these gradients is the fast stellar wind. This is not a surprise since, following Bianchi and Grewing (1987), the mass of the central star is close to the C-O core mass of the progenitor star. It is interesting to note the discrepancy found by these authors between the stellar temperature determined from IUE observations (90000 K) and the much lower value of 30000 K inferred from the nebular ionization. They overcame the impasse with a carbon curtain screening the high energy photons; this 2 - 5x10<sup>18</sup> cm<sup>-2</sup> CII column density should be located at the inner edge of the shell, where the fast wind from the nucleus interacts with the nebular gas. The hypothesis is suggestive, but some doubts remain, essentially for lack of space. In fact, on the one hand our ionization maps indicate that the CII layer producing the requested column density must have a thickness of the order of 10<sup>19</sup> cm; on the other hand, the mean nebular radius of NGC 40 (assuming a distance of 1100 pc, the centre of the large number of individual and statistical distances reported in the literature) is 0.08 pc, i.e. 40 times smaller than the CII layer thickness. An even more unfavorable situation occurs for a nebular distance of 980 pc, as adopted by Bianchi and Grewing (1987). We wish to stress here the great opportunities opened by the analysis of the zero-velocity columns in expanding nebulae (planetary nebulae, shells around Population I W-R stars, supernova remnants etc.): for the first time we can isolate a definite slice of the nebula, thus removing the tiresome projection problems related to these extended objects. Note the strict analogy between the zero-velocity column obtained from slit spectroscopy and the rest-frame image derived from Fabry-Perot interferometry; in a certain sense, the rest-frame image is a zero-velocity column extended to all position angles (and this represents a considerable observational advantage). Moreover, the expansion velocity of a few objects (for instance the Crab nebula) is so large that the “zero-velocity column” analysis can be applied to very low resolution data (slit spectroscopy or Fabry-Perot interferometry). Concerning our tomographic results, we confess genuine satisfaction with the quality of these “preliminary” maps (clearly, the final goal of each tomographic study is a complete, three-dimensional model, but this will need a more detailed spectroscopic coverage).
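The factor of ~40 quoted above follows from simple division. A back-of-the-envelope sketch; the C<sup>+</sup> density adopted here is an illustrative value of ours, chosen only to reproduce the ~10<sup>19</sup> cm layer thickness:

```python
PC_CM = 3.086e18                 # cm per parsec

column  = 3.5e18                 # CII column density, cm^-2 (middle of 2-5e18)
n_cplus = 0.35                   # illustrative C+ density, cm^-3 (our assumption)
radius  = 0.08 * PC_CM           # mean nebular radius for d = 1100 pc, cm

thickness = column / n_cplus
print(f"CII layer thickness ~ {thickness:.0e} cm")               # ~1e19 cm
print(f"thickness / nebular radius ~ {thickness / radius:.0f}")  # ~40
```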
Our figures confirm that the main body of NGC 40 has an inhomogeneous, elongated barrel-shaped structure (“an opened-up ellipse” following Mellema, 1995; “a slightly tilted cylindrical sleeve” following Balick et al., 1987), with thin arcs emerging at both ends of the major axis. This morphology is normally interpreted as the result of the interaction of the fast stellar wind from the high temperature central star with the slow, inhomogeneous (i.e. denser in the equatorial plane) super-wind from the AGB progenitor (see Balick et al., 1987, Mellema, 1995 and references therein). Following Balick et al. (1987), the stellar wind creates in NGC 40 an interior bubble at high temperature and ionization. The hot gas cools by expansion or by radiative losses in forbidden lines when it interacts with the dense, external shell; the \[OIII\] emissions arise in a relatively smooth, low density interface between the bubble gas and the H$`\alpha `$+\[NII\] filaments. As noticed by Meaburn et al. (1996), in this case the \[OIII\] image is expected to follow the H$`\alpha `$+\[NII\] one, and both are dominated by the swept-up shell. The alternative explanation proposed by Meaburn et al. is based on Mellema’s (1995) models: the nebular material is swept-up both by the fast wind and by the ionization front. So, in the first evolutionary phases, when the central star temperature is still relatively low, the two shells appear as distinct features: H$`\alpha `$ and \[NII\] prevail in the shell swept-up by the H ionization front, and \[OIII\] in the wind-swept shell. In the specific case of NGC 40, the second, faster shell has almost overtaken the first one. Which model is right? The answer provided by our data is ambiguous: on the one hand we find that the \[OIII\] and the H$`\alpha `$+\[NII\] emitting regions are disconnected (in many cases they are opposite, i.e. strong \[OIII\] is associated with faint H$`\alpha `$+\[NII\], and vice versa); on the other hand, the density distribution across the shell never shows evidence of the double peaked structure implicit in Mellema’s models. So, for the present, we can only say that:
- in the equatorial region of NGC 40 the stellar wind is blocked by the dense nebular gas and the \[OIII\] emissions occur in the innermost, faster expanding part of the “bell” density profile;
- at the poles, where densities are lower than at the equator, the braking effect of the nebular gas is less efficient and \[OIII\] appears in extended zones, internally to the polar caps.

A question arises: since we observe modest expansion velocities in the nebular gas, where (and how) is the stellar wind decelerated, given that its original speed is about 2000 km s<sup>-1</sup>? It is evident that further, more accurate observations are needed, not only for the nebula, but also for the central star. At the moment, the mass-loss rates and terminal wind velocities reported in the literature for the WC8 nucleus of NGC 40 span the range from 3x10<sup>-8</sup> M yr<sup>-1</sup> and 2600 km s<sup>-1</sup> (Cerruti-Sola and Perinotto, 1985) to 1x10<sup>-6</sup> M yr<sup>-1</sup> and 1800 km s<sup>-1</sup> (Bianchi, 1992). Moreover, the presence of wind fluctuations was recently discovered by Balick et al. (1996) and Acker et al. (1997).
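As a rough numerical check on these two sets of wind parameters (anticipating the momentum argument developed in the next paragraph), one can assume momentum conservation, with the wind momentum flux acting on a shell of mass of order 10<sup>-2</sup> M and the $`\mathrm{\Delta }V\simeq 3.5`$ km s<sup>-1</sup> gradient found above, and take the mechanical luminosity as of order the mass-loss rate times the wind speed squared; these simplifications are ours, but the sketch reproduces the time scales and luminosities quoted below:

```python
MSUN_G, YR_S, LSUN = 1.989e33, 3.156e7, 3.828e33  # g, s, erg/s

def interaction_time_yr(m_shell, dv_kms, mdot, v_wind_kms):
    """Years for a wind (mdot in Msun/yr) to impart momentum m_shell*dv."""
    return m_shell * dv_kms / (mdot * v_wind_kms)

def wind_luminosity_lsun(mdot, v_wind_kms):
    """Mechanical wind luminosity, of order mdot * v_wind**2, in Lsun."""
    return mdot * MSUN_G / YR_S * (v_wind_kms * 1e5) ** 2 / LSUN

print(interaction_time_yr(1e-2, 3.5, 1e-6, 1800),   # ~19 yr  (Bianchi 1992)
      wind_luminosity_lsun(1e-6, 1800))             # ~530 Lsun
print(interaction_time_yr(1e-2, 3.5, 3e-8, 2600),   # ~450 yr (Cerruti-Sola & Perinotto)
      wind_luminosity_lsun(3e-8, 2600))             # ~33 Lsun
```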
The ionized nebular mass obtained from the observed H$`\beta `$ flux (Sabbadin et al., 1987) turns out to be M<sub>ion</sub>= 2 - 5 x10<sup>-2</sup> M (depending on the adopted values of F(H$`\beta `$), c(H$`\beta `$), distance, angular radius, electron density and electron temperature), and a conservative upper limit to the nebular mass pumped up by the fast stellar wind can be put at 1x10<sup>-2</sup> M. Even in the most favorable case (i.e. momentum conservation), Bianchi’s wind (10<sup>-6</sup> M yr<sup>-1</sup> and 1800 km s<sup>-1</sup>) needs less than 20 years to produce the observed nebular acceleration (and in this case, where, and how, is the wind luminosity dissipated, amounting to 500 L, a value which is at least ten times the nebular luminosity?). A longer interaction time (500 years) is obtained with the mass-loss rate and wind velocity reported by Cerruti-Sola and Perinotto (1985); in this case the wind luminosity is 33 L, of the same order of magnitude as the nebular luminosity. A final question concerns NGC 40 as the prototype of a small group of very low excitation planetary nebulae powered by “late” WR stars: are the peculiarities found in NGC 40 (in particular, the positive inwards velocity gradient and the radial chemical composition gradient) a common characteristic of these nebulae (e.g. SwSt 1, PM 1-188, Cn 3-1, BD+30<sup>o</sup>3639, He 2-459, IRAS 21282, M 4-18, He 2-99, He 2-113, He 2-142, He 3-1333, Pe 1-7 and K 2-16)? Though a definitive answer will need accurate imagery and spectroscopy of a suitable sample of objects, a first indication comes from the very recent results by Bryce and Mellema (1999), showing that BD+30<sup>o</sup>3639 expands faster in \[OIII\] than in \[NII\]. ## 9 Conclusions Long-slit echellograms at different position angles of the bright core of NGC 40 allowed us to analyse the expansion velocity field (the higher the ionization, the faster the motion), the radial density distribution (a “bell” profile reaching $`Ne`$=4000 cm<sup>-3</sup>), the radial ionization structure (peculiar in the innermost parts), and the chemical abundances (a gradient is present, due to contamination by the hydrogen-depleted stellar wind). Moreover, tomographic maps are obtained, giving the spatial matter distribution along the cross-sections of the nebula covered by the slit. All these observational results, discussed within the interacting winds model, point out our incomplete, “nebulous” knowledge of the physical processes shaping this peculiar planetary nebula where “the tail wags the dog” (as wittily suggested by Prof. Lawrence Aller, private communication). ###### Acknowledgements. We wish to express our gratitude to Professors Lawrence Aller and Manuel Peimbert and to Drs. Luciana Bianchi, Francesco Strafella and Vittorio Buondí for help, suggestions and encouragement. We thank the whole technical staff (in particular the night assistants) of the Astronomical Observatory of Asiago at Cima Ekar for their competence and patience. This paper is partially based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Centro Galileo Galilei of the CNAA (Consorzio Nazionale per l’Astronomia e l’Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
# ACKNOWLEDGMENTS This work was supported by DFG, BMBF, and GSI. We thank L.P. Csernai, M. Gyulassy, I.N. Mishustin, D.H. Rischke, and L. Satarov for numerous interesting discussions. A.D. acknowledges support from the DOE Research Grant, Contract No. De-FG-02-93ER-40764.
# Two Stellar Mass Functions Combined into One by the Random Sampling Model of the IMF ## 1 Introduction Recent theoretical models suggest that the stellar initial mass function may result from star formation in hierarchically-structured, or multifractal clouds that are characterized by a local conversion time of gas into stars that is everywhere equal to the dynamical time on the corresponding mass scale (Elmegreen 1997, 1999a, hereafter papers I and II). This model gives the Salpeter power law slope, which was suggested to be the best representation of the IMF at intermediate to high mass (see reviews of observations in Scalo 1986, 1998; papers I, II, and Massey 1998), and it gives a break from that slope to a flattened or possibly decreasing part at lower mass where interstellar clumps cannot become self-gravitating during normal cloud evolution. This break point may be at the thermal Jeans mass, given by the Bonnor-Ebert critical mass $$M_J=0.35\left(\frac{T}{10K}\right)^2\left(\frac{P}{10^6k_B}\right)^{-1/2}\mathrm{M}_{\odot }$$ (1) for temperature $`T`$ and total pressure $`P`$ (thermal + turbulent + magnetic) in the star-forming region. The thermal Jeans mass is fairly constant from region to region in normal galaxies (Paper II), or at least as constant as the IMF is observed to be, considering the limited data on the low-mass flattened part. Yet the characteristic mass $`M_J`$ may be varied, if needed, to account for a high mass bias in starburst regions (Rieke et al. 1980) or the early Universe (Larson 1998) if $`T^2`$ increases more than $`P^{1/2}`$. Such a change might be expected in high density regions because they have more intense and concentrated star formation. A general lack of understanding of the details of star formation during the final stages has limited the success of the model so far to the power law range, and to all of the implications of stochastic, rather than parameterized, star formation (Papers I and II). The low mass part of the IMF had not been observed well anyway, so the original models made no effort to fit any data there. There is a growing consensus, however, that the low mass IMF becomes approximately flat (slope=0) over a significant range in mass when plotted as a histogram of the log of the number of stars per unit logarithmic mass interval versus the log of the mass. In such a plot, the Salpeter (1955) slope is $`-1.35`$. This flattening was suspected over two decades ago (Miller & Scalo 1979), but the most recent observations are much clearer (Comeron, Rieke, & Rieke 1996; Macintosh, et al. 1996; Festin 1997; Hillenbrand 1997; Luhman & Rieke 1998; Cool 1998; Reid 1998; Lada, Lada & Muench 1998; Hillenbrand & Carpenter 1999). As a result of these new observations, we suspect that the physical origin of the low mass IMF can be understood in statistical terms to the same extent as the power law part. This paper offers one possible explanation for the flattening of the IMF at low mass that may be easily tested with modern observations of star formation in resolved clumps. This explanation is made here after first reviewing the origin of the power law part of the IMF at intermediate to high mass. ## 2 The Random Sampling Model Diffuse interstellar clouds and the pre-star formation parts of self-gravitating clouds are generally structured in a hierarchical fashion with small clumps inside larger clumps over a wide range of scales (see reviews in Scalo 1985; Elmegreen & Efremov 2000; for a review of cloud fractal structure, see Stützki 1999).
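Before continuing, a minimal numerical rendering of Eq. (1) above; the example parameter choices (a cold Galactic case and a warmer, higher-pressure, starburst-like case) are ours:

```python
def jeans_mass(T, P_over_kB):
    """Bonnor-Ebert characteristic mass of Eq. (1), in solar masses.

    T in K; P/k_B (thermal + turbulent + magnetic) in cm^-3 K.
    """
    return 0.35 * (T / 10.0) ** 2 * (P_over_kB / 1.0e6) ** -0.5

print(jeans_mass(10.0, 1.0e6))  # 0.35 Msun: typical Galactic conditions
print(jeans_mass(30.0, 1.0e8))  # ~0.3 Msun: the T^2 and P^(1/2) changes nearly offset
```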
Stellar masses occur in the midst of this clump range, neither at the smallest nor the largest ends, and there is no distinction in the non-self-gravitating (pre-stellar) clump spectrum indicating where the stellar mass spectrum will eventually lie. These observations imply that the basic environment for star formation is scale-free, so the mass scale for stars has to come from specific physical processes. In papers I and II, we proposed that the mass scale arises because of the need for self-gravity to dominate the most basic of all forces, thermal pressure, and this leads to the $`M>M_J`$ constraint discussed above. In fact, the break point in the power law part of the IMF arises at about $`M_J`$ for normal conditions. The Salpeter power-law portion of the IMF was then proposed to arise as stars form out of gas structures that lie at random levels in the hierarchy (where $`M>M_J`$). Perhaps the physical process that initiates this is turbulence compression followed by gravitational collapse of the compressed slab (Elmegreen 1993; Klessen, Heitsch, & MacLow 2000), or perhaps it is an external compression acting on a certain gas structure. Regardless, the star formation process samples the hierarchical structure of the cloud in this model, and builds up a stellar mass spectrum over time. This process of sampling where and when a particular star forms can only be viewed as random at the present time, just as the time and place of rain in a terrestrial cloud pattern is random and given only as a probability in most weather forecasts. The detailed star formation processes are not proposed to be random, only the manifestation of them, considering that turbulence is too complex for initial and boundary conditions to be followed very far over time and space with enough certainty to produce a predictable result. Thus we discuss here and in the previous two papers only the probability of sampling from various hierarchical levels, and we use this probability to generate a stochastic model that builds up the IMF numerically after many random trials. From this point of view, the basic form of the IMF is fairly easy to understand, even though the final slope has not been derived analytically (i.e., it has been obtained only by running the computer model). For example, if the sampling process has a uniform probability over all hierarchical levels, then the mass spectrum is $`M^{-2}dM=M^{-1}d\mathrm{log}M`$ exactly (e.g., Fleck 1996; Elmegreen & Falgarone 1996). A dynamically realistic cloud will not sample in such a uniform way, however. It will be biased towards regions of higher density as they evolve more quickly. For most dynamical processes that precede star formation, including self-gravitational contraction, magnetic diffusion, and turbulence, the rate of evolution scales with the square root of the average local density. This means that lower mass clumps are sampled more often than higher mass clumps. As a result, the mass spectrum steepens to $`M^{-1.15}d\mathrm{log}M`$. The extra $`0.15`$ in the power comes from $`(3-D)/2D`$ for fractal dimension $`D`$ (Paper I). In addition to this steepening from density weighting, there is another steepening effect from mass competition. Once a small mass clump turns into an independent, self-gravitating region, the larger mass clump that surrounds it has less gas available to make another star. If it ever does make a star, then the mass of this star will be less than it would have been if the first star inside of it had never formed.
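The uniform-sampling result quoted above ($`M^{-1}d\mathrm{log}M`$) is easy to check numerically. A toy Monte Carlo sketch, assuming the simplest hierarchy of exactly two subpieces per piece; the density weighting and mass competition that steepen the slope are deliberately omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
LEVELS = 10

# Level i holds 2**i clumps, each of mass 2**-i (in top-clump units), so
# picking any clump with equal probability weights level i by 2**i.
weights = 2.0 ** np.arange(LEVELS)
levels = rng.choice(LEVELS, size=200_000, p=weights / weights.sum())

# Counts per logarithmic mass bin double toward low mass: N ~ M**-1 d(log M)
print(np.bincount(levels, minlength=LEVELS))
```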
After these two steepening effects (density weighting and mass competition), the IMF becomes $`M^{-1.35}d\mathrm{log}M`$, which is the Salpeter function. ## 3 The Flat Part of the IMF The process of random, weighted sampling from hierarchical clouds changes below the thermal Jeans mass because most of the clumps there cannot form stars at all: there is not enough self-gravity to give collapse no matter how the clump is put together. As a result, the process of star formation stops at sufficiently low clump mass. This does not mean that stars smaller than $`M_J`$ cannot form. Each collapsing clump turns into one or more stars with some efficiency less than unity, so although the average star mass is proportional to the clump mass in this model, the actual star mass can be considerably less. There should even be a range of stellar masses coming from each self-gravitating clump of a given mass, depending on how the clump divides itself into stars and how much disk and peripheral gas gets thrown back without making stars. We have now discovered a curious aspect of this random sampling model when the conversion of clump mass into star mass is considered explicitly. That is, the probability distribution function for this conversion reveals itself clearly below the characteristic mass, and is, in fact, identical to the observed form of the IMF there. We also find that this clump-to-star distribution function can apply equally well above and below $`M_J`$, in a self-similar way, but that it only shows up below this mass. Thus the power law part of the IMF is nearly independent of the details of how a clump gets converted into stars (as long as the dynamical rate is involved) and depends primarily on the cloud structure, whereas the flat part of the IMF depends exclusively on the details of clump-to-star conversion and is independent of the nature of the cloud’s structure. To describe this process mathematically, we suppose that each self-gravitating clump of mass $`M_c`$ makes a range of star masses $`M_s`$ such that the probability distribution function, $`P(ϵ)`$, of the relative star mass, $`ϵ=M_s/M_c`$, is independent of $`M_c`$. This is consistent with the self-similarity of star formation that is assumed for the rest of the IMF model. The distribution function is written for logarithmic intervals as $`P(ϵ)d\mathrm{log}ϵ`$. The basic point of this paper is that $`P(ϵ)`$ must be approximately constant for all clump masses to give the flat part of the IMF at low mass. The final mass function for stars, in logarithmic intervals, $`n_s(M_s)d\mathrm{log}M_s`$, can now be written in terms of the mass function for self-gravitating, randomly-chosen clumps, $`n_c(M_c)d\mathrm{log}M_c`$, as $$n_s\left(M_s\right)=\int _{ϵ_{min}}^{ϵ_{max}}P(ϵ)n_c\left(M_s/ϵ\right)d\mathrm{log}ϵ.$$ (2) The upper limit to the integral, $`ϵ_{max}`$, is the largest relative mass for a star that is likely to form from a self-gravitating clump. It is perhaps slightly less than unity when $`M_s>>M_J`$, although its precise value is not necessary here. We denote it by the constant $`ϵ_{max,0}`$ and take $`ϵ_{max}=ϵ_{max,0}`$ when $`M_s>M_Jϵ_{max,0}`$. When $`M_s<M_Jϵ_{max,0}`$, the efficiency can be at most $`M_s/M_J`$ since the smallest clump that can form stars is $`M_J`$. Thus $$ϵ_{max}=\frac{M_s}{MAX(M_J,M_s/ϵ_{max,0})}.$$ (3) The lower limit to the integral in equation (2), $`ϵ_{min}`$, is not known yet from observations. The ratio of $`ϵ_{max,0}`$ to $`ϵ_{min}`$ will turn out to be the mass range for the flat part of the IMF, which is denoted by $`\mathcal{R}`$.
It may have a value of $`10`$ or more according to recent observations (Sect. I), but future observations of lower mass brown dwarfs could extend the flat part further and make $`\mathcal{R}`$ larger. In any case, we take $$ϵ_{min}=ϵ_{max,0}/\mathcal{R};$$ (4) $`ϵ_{min}`$ does not depend on whether $`M_s`$ is greater than or less than $`M_Jϵ_{max,0}`$. To solve the integral in equation (2), we convert $`ϵ`$ to $`M_s/M_c`$ and $`d\mathrm{log}ϵ=-d\mathrm{log}M_c=-dM_c/M_c`$ for constant $`M_s`$ from the left hand side. Then, for logarithmic intervals in $`M_s`$, $$n_s(M_s)=P_0n_0\int _{M_{c,min}}^{M_{c,max}}M_c^{-1-x}𝑑M_c$$ (5) where $`P_0=P(ϵ)=1/d\mathrm{log}ϵ=1/\mathrm{log}\mathcal{R}`$ for all $`M_c`$, and $`n_0`$ comes from the constant in the clump function, $`n(M_c)=n_0M_c^{-x}`$; $`x=1.35`$ from the random sampling model and is the same as the power law in the Salpeter function. The integral limits are $`M_{c,min}=M_s/ϵ_{max}=MAX(M_J,M_s/ϵ_{max,0})`$ and $`M_{c,max}=M_s/ϵ_{min}=\mathcal{R}M_s/ϵ_{max,0}`$. The solution to equation (5) depends on whether $`M_Jϵ_{max,0}`$ is greater than or less than $`M_s`$. For $`M_s>M_Jϵ_{max,0}`$, $$n_s(M_s)=\frac{P_0n_0}{x}\left(\frac{ϵ_{max,0}}{M_s}\right)^x\left(1-\mathcal{R}^{-x}\right)\propto M_s^{-x}.$$ (6) For $`M_Jϵ_{max,0}/\mathcal{R}<M_s<M_Jϵ_{max,0}`$, $`n_s(M_s)={\displaystyle \frac{P_0n_0}{xM_J^x}}\left[1-\left({\displaystyle \frac{M_Jϵ_{max,0}}{\mathcal{R}M_s}}\right)^x\right]\simeq \mathrm{constant},`$ (7) and for $`M_s\le M_Jϵ_{max,0}/\mathcal{R}`$, this latter result is zero. At $`M_s=M_Jϵ_{max,0}`$, the expressions in equations (6) and (7) are the same. A graph of $`n_s(M_s)`$ from these equations is shown in figure 1 for $`\mathcal{R}=10`$, $`ϵ_{max,0}=1`$, and $`M_J=0.3`$ M. Figure 2 shows the result better using the numerical model with random sampling described in Papers I and II, but now with a stellar mass equal to a random fraction $`ϵ`$ of the chosen clump mass. This random fraction is equally distributed over a logarithmic range, as specified in the above discussion, by using the equation $`ϵ=\mathcal{R}^{r-1}`$ for random variable $`r`$ that is distributed uniformly over $`[0,1]`$. Four different values of $`\mathcal{R}`$ are used in the figure to show how the length of the flattened part equals $`\mathcal{R}`$. The $`\mathcal{R}=1`$ case shows the pure clump selection spectrum, as in papers I and II, but with a sharper lower mass cutoff than in the other papers, taken here from a failure probability $`P_f=\mathrm{exp}\left(-\left[M_c/M_J\right]^4\right)`$. We also took $`ϵ_{max,0}=1`$ for simplicity; the exact value is not important (it occurs in the expression for $`ϵ(r)`$). For all of the cases, there are 10 levels of hierarchical structure in the cloud model, with an average of 2 subpieces per piece, and an actual number of subpieces per piece distributed as a Poisson variable over the interval from 1 to 4. The results indicate that for $`M_s>M_Jϵ_{max,0}`$, the model IMF is a power law with the same power as the clump mass spectrum obtained previously. The distribution $`P\left(ϵ\right)`$ does not affect the IMF above the break point even though it applies there. This is because each chosen clump makes a wide range of stellar masses and contributes a flat component of width $`\mathcal{R}`$ to the local IMF, but the sum of the number of stars at each stellar mass is still a power law with the same power as the clump spectrum, independent of $`P\left(ϵ\right)`$. The IMF becomes flat below $`M_Jϵ_{max,0}`$ because all of the stars there come from clumps with masses near $`M_J`$.
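A compact sketch of the two-spectrum construction just described: clump masses drawn from the $`M_c^{-x}`$ spectrum above $`M_J`$, each converted to a star of mass $`ϵM_c`$ with $`ϵ=\mathcal{R}^{r-1}`$. This is a toy version of ours, with a hard cutoff at $`M_J`$ (in place of the failure probability $`P_f`$) and a direct power-law draw (in place of the full hierarchical cloud model):

```python
import numpy as np

rng = np.random.default_rng(2)
X, MJ, R, N = 1.35, 0.3, 10.0, 300_000

# Clump masses from n(Mc) ~ Mc**-X per d(log Mc), i.e. a pdf ~ Mc**-(1+X),
# on [MJ, 1000*MJ], by inverse-transform sampling.
u = rng.random(N)
mc = MJ * (1.0 - u * (1.0 - 1000.0 ** -X)) ** (-1.0 / X)

# One star per clump: eps log-uniform on [1/R, 1] (taking eps_max,0 = 1).
ms = R ** (rng.random(N) - 1.0) * mc

# Crude text histogram in log M: flat over [MJ/R, MJ], slope -X above MJ.
hist, edges = np.histogram(np.log10(ms), bins=40)
for lo, n in zip(edges[:-1], hist):
    print(f"{lo:6.2f} {'*' * int(60 * n / hist.max())}")
```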
In fact, the decreasing nature of the clump spectrum toward higher masses makes clumps with masses very close to $`M_J`$ the favored parents for stars with masses less than $`M_Jϵ_{max,0}`$. Thus the low mass IMF is determined entirely by the separate mass spectrum for stars that form inside each self-gravitating clump. Evidently, there are two mass spectra for star formation, one coming from cloud structure and clump selection, giving the Salpeter function, and another coming from star formation inside each clump, giving the flat part. The latter function actually applies everywhere, but it does not visibly affect the Salpeter distribution for intermediate to high masses because stars there have a wide range ($`=\mathcal{R}`$) of clump masses for parents. It only appears for star masses that form from the lowest mass clumps that can make a star. The model IMF decreases sharply below the smallest star mass that can form in a clump of mass $`M_J`$, which is $`M_Jϵ_{max,0}/\mathcal{R}`$. The parameters $`ϵ_{max,0}`$ and $`\mathcal{R}`$, as well as the function $`P(ϵ)`$, depend on the physics of star formation, unlike $`x`$ in the power law part of the IMF, which depends primarily on the physics of prestellar cloud structure. ## 4 Conclusions The flat part of the IMF is proposed to result from the distribution of the ratio of star mass to clump mass, which must also be flat in logarithmic intervals. Such a distribution shows up only below the mass of the smallest unstable clump, so in principle, it might only apply there physically. However, it could apply equally well for all clumps in a star-forming cloud because it does not actually reveal itself at masses above this threshold. The low mass flattening and eventual turnover below the brown dwarf range depend in unspecified ways on the detailed physics of star formation, which also contribute to the single characteristic mass, $`M_J`$. The slope of the power law part of the IMF, above $`M_J`$, depends primarily on cloud structure, although the same complexities of star formation probably apply there too. This is why the Salpeter IMF appears in so many diverse environments: the universal character of turbulent cloud structure determines the power law nature and the power law itself for intermediate to high mass stars, independent of the details of star formation. Variations in the IMF from diverse physical conditions should be much more pronounced at low mass. This model may be checked observationally by determining the range and distribution function for final star mass in resolved clumps of a given mass. The resolution of the clumps is important so one can be sure the clumps belonging to each individual star or binary pair are measured, and not confused with larger clumps that may include substructure associated with separate stars. If this model is correct, then the star mass distribution for clumps of a given mass will be much flatter than the Salpeter function, and this may be the case for all clump masses, even those containing intermediate to high mass stars where the Salpeter function applies to the ensemble of stars coming from clumps of different masses.
RAL-TR/1999-086 hep-ph/9912435 20 December 1999 # CP and T Violation in Neutrino Oscillations and Invariance of Jarlskog’s Determinant to Matter Effects P. F. Harrison, Physics Department, Queen Mary and Westfield College, Mile End Rd., London E1 4NS, UK (E-mail: p.f.harrison@qmw.ac.uk) and W. G. Scott, Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, UK (E-mail: w.g.scott@rl.ac.uk) ## Abstract Terrestrial matter effects in neutrino propagation are $`T`$-invariant, so that any observed $`T`$ violation when neutrinos pass through the Earth, such as an asymmetry between the transition probabilities $`P(\nu _\mu \to \nu _e)`$ and $`P(\nu _e\to \nu _\mu )`$, would be a direct indication of $`T`$ violation at the fundamental level. Matter effects do however modify the magnitudes of $`T`$-violating asymmetries, and it has long been known that resonant enhancement can lead to large effects for a range of plausible values of the relevant parameters. We note that the determinant of the commutator of the lepton mass matrices is invariant under matter effects and use this fact to derive a new expression for the $`T`$-violating asymmetries of neutrinos propagating through matter. We give some examples which could have physical relevance. Evidence for oscillations of atmospheric neutrinos indicates that at least muon and tau neutrinos participate in lepton mixing with large amplitudes. Other evidence of a deficit of detected solar neutrinos indicates that electron neutrinos must also participate in lepton mixing, and the most recent data disfavour small mixing angles. The participation of all three species of neutrinos in lepton mixing raises the possibility of $`CP`$ and $`T`$ violations in neutrino oscillations, and, given the recent evidence of direct $`CP`$ violation in the hadronic sector, it would perhaps be surprising if such effects were not also manifested in the leptonic sector. The emergence of large mixing parameters in the leptonic sector offers the exciting prospect of potentially large $`CP`$- and $`T`$-violating asymmetries. One possibility which has not been excluded is that $`CP`$ and $`T`$ violations are maximal for neutrinos in vacuum. A number of other authors have explored the phenomenology of $`CP`$ and $`T`$ violation in neutrino oscillations for several different scenarios of lepton mass and mixing parameters. In both solar and atmospheric experiments, neutrinos can traverse a significant fraction of the Earth. As long-baseline accelerator and reactor experiments push to still longer baselines, the amount of material traversed by man-made neutrino beams also increases. This makes the understanding of matter effects in neutrino oscillations essential in order to determine the complete pattern of neutrino masses and mixings. The particle/anti-particle non-invariance of the matter term in the effective Hamiltonian for neutrino propagation in matter means that observation of a particle-antiparticle asymmetry does not necessarily indicate a fundamental $`CP`$-violation, although it is, in principle, possible to discriminate between the two sources of such effects using the observed distance- and energy-dependence.
However, matter effects in neutrino propagation between two points at the surface of the Earth are $`T`$-symmetric, and any observed $`T`$ violation, such as an asymmetry between the transition probabilities $`P(\nu _\mu \to \nu _e)`$ and $`P(\nu _e\to \nu _\mu )`$, would certainly be a direct indication of $`T`$ violation (and, via $`CPT`$ invariance, of $`CP`$ violation) at the fundamental level. Despite the fact that matter effects in the Earth are $`T`$-invariant, they have a non-trivial effect on the signature of fundamental $`T`$ violations of $`\nu `$ and $`\overline{\nu }`$ separately, as well as on $`CP`$ violations, through their influence on the effective neutrino mass and mixing parameters. In fact, it has long been known that very large resonant enhancements of the $`T`$-violating asymmetries are possible in terrestrial matter. In this paper, we derive a simple new result for the effect of matter on the $`T`$- and $`CP`$-violating asymmetries, and explore some of the consequences in experimentally preferred scenarios of mass and mixing parameters. In the case of three generations of massive neutrinos, in a weak interaction basis which diagonalises the charged lepton mass matrix, $`M_{\ell }=D_{\ell }`$, the neutrino mass matrix, $`M_\nu `$, is in general an arbitrary 3x3 matrix. The Hermitian square of the neutrino mass matrix, $`M_\nu M_\nu ^{\dagger }`$, may be diagonalised to find its eigenvalues, and its eigenvectors form the columns of the lepton mixing matrix, $`U`$. It is well-known that under these circumstances, neutrinos propagating in vacuum undergo flavour oscillations, and furthermore, in general, these result in $`CP`$- and $`T`$-violating asymmetries. The $`CP`$- and $`T`$-violating asymmetries in the transition probabilities are given (for arbitrary mixing matrix) by the universal function

$`P(\nu _\alpha \to \nu _\beta )-P(\overline{\nu }_\alpha \to \overline{\nu }_\beta )`$ $`=`$ $`P(\nu _\alpha \to \nu _\beta )-P(\nu _\beta \to \nu _\alpha )`$ (1)
$`=`$ $`16J\mathrm{sin}(\mathrm{\Delta }_{12}L/2)\mathrm{sin}(\mathrm{\Delta }_{23}L/2)\mathrm{sin}(\mathrm{\Delta }_{31}L/2)`$ (2)

for any pair of flavour indices $`\alpha `$, $`\beta `$, where $`J`$ is Jarlskog’s mixing matrix-dependent invariant and the $`\mathrm{\Delta }_{ij}=(\lambda _i-\lambda _j)`$ are the three differences of eigenvalues of the Hamiltonian, $`H`$. In the vacuum case, it is sufficient to set

$$H=M_\nu M_\nu ^{\dagger }/2E$$ (3)

so that $`\lambda _i=m_i^2/2E`$, where $`m_i`$ is the mass of the $`i`$th neutrino mass eigenstate (as usual, we number the eigenstates in increasing order of mass). We note that the parameter $`J`$ has a maximal value of $`1/(6\sqrt{3})`$, and that the product of the three sine functions (the arguments are not independent) has a maximal value of $`3\sqrt{3}/8`$, which it takes when all three arguments are separated by $`120^{\circ }`$. The maximum magnitude of the product of three sines is controlled by the smallest of the three arguments, $`\mathrm{\Delta }_{12}L/2`$, and we can consider $`2/\mathrm{\Delta }_{12}`$ as the reduced wavelength of the asymmetry. The asymmetry is observable only once this term has developed a significant phase, and if it is furthermore not averaged to zero by resolution effects. This fact limits the observability of $`CP`$ and $`T`$ violations in neutrino oscillations in vacuum to a window of parameter-space, as pointed out in Ref. .
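For concreteness, the vacuum asymmetry of Eq. (2) is easy to evaluate numerically. The following minimal sketch (ours, not code from this paper) assumes the standard conversion $`\mathrm{\Delta }_{ij}L/2=1.267\,\delta m_{ij}^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]`$; the mixing matrix and mass values passed in are whatever model one wishes to test.

```python
import numpy as np

def jarlskog(U):
    # J = Im(U_e1 U_mu2 U_e2* U_mu1*) for a unitary 3x3 mixing matrix
    return float(np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0])))

def t_asymmetry_vacuum(U, m2, L, E):
    # Eq. (2): 16 J sin(D12 L/2) sin(D23 L/2) sin(D31 L/2),
    # with D_ij = (m2_i - m2_j)/2E; m2 in eV^2, L in km, E in GeV
    s = lambda i, j: np.sin(1.267 * (m2[i] - m2[j]) * L / E)
    return 16.0 * jarlskog(U) * s(0, 1) * s(1, 2) * s(2, 0)
```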
In the case that the neutrino beam passes through matter of uniform density, the Hamiltonian is modified to $`H^{\prime }=H+\mathrm{\Delta }H`$, where

$$\mathrm{\Delta }H=\pm \sqrt{2}GN_e\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right).$$ (4)

The $`\pm `$ sign is to be taken as $`+`$ for neutrino and $`-`$ for anti-neutrino propagation, and makes explicit the particle/anti-particle asymmetry introduced by matter effects. The effect of matter is therefore to modify the mass eigenvalues and the mixing matrix elements, compared with their vacuum values. The matter electron density, $`N_e`$, lies, for the Earth, in the range $`0\lesssim N_e\lesssim 6.2N_A\,\mathrm{cm}^{-3}`$. The expression, Eq. (2), for the $`T`$-violating asymmetries is still valid in the case of propagation through matter of uniform density, except that the eigenvalues and the parameter $`J`$ appearing in the expression are modified to their matter values $`\lambda _i^{\prime }`$ and $`J^{\prime }`$ respectively. For the $`CP`$-violating asymmetry between neutrinos and anti-neutrinos, the situation is made much more complicated by the particle/anti-particle asymmetry of the matter term in the Hamiltonian, and there are terms additional to that on the right-hand side of Eq. (2). Jarlskog has given an easy way to calculate the $`CP`$- and $`T`$-violation parameter, $`J`$, in terms of the mass matrices and their eigenvalues. This may be written:

$$2\mathrm{\Delta }_{12}\mathrm{\Delta }_{23}\mathrm{\Delta }_{31}J=\mathrm{Im}\{\mathrm{Det}[M_{\ell }^2,H]\}/(\mathrm{\Delta }_{e\mu }\mathrm{\Delta }_{\mu \tau }\mathrm{\Delta }_{\tau e})$$ (5)

where the $`\mathrm{\Delta }_{\ell \ell ^{\prime }}`$ on the RHS refer to the differences between the squared charged lepton masses. This formula is valid in any weak basis, but it is useful for our purposes to evaluate the RHS in the weak basis which diagonalises the charged lepton mass matrix, in which it can be written:

$$\mathrm{Im}\{\mathrm{Det}[D_{\ell }^2,H]\}/(\mathrm{\Delta }_{e\mu }\mathrm{\Delta }_{\mu \tau }\mathrm{\Delta }_{\tau e})=2\mathrm{Im}(H_{12}H_{23}H_{31}).$$ (6)

It is easy to see that use of the vacuum Hamiltonian, $`H`$, or the matter-modified one, $`H^{\prime }`$, in Eq. (6) leaves the result invariant, as $`\mathrm{\Delta }H`$ is diagonal in this basis. In fact, more generally, the commutator

$$[D_{\ell }^2,H]=[D_{\ell }^2,H^{\prime }]$$ (7)

is invariant to matter effects in this basis. Taking the determinant on both sides of Eq. (7) yields the physical result, valid in any weak basis, that Jarlskog’s determinant of the commutator of the two lepton mass(-squared) matrices is invariant to matter effects, and, from Eq. (5), that:

$$\mathrm{\Delta }_{12}\mathrm{\Delta }_{23}\mathrm{\Delta }_{31}J=\mathrm{\Delta }_{12}^{\prime }\mathrm{\Delta }_{23}^{\prime }\mathrm{\Delta }_{31}^{\prime }J^{\prime }$$ (8)

(where the primed quantities refer to the matter-modified Hamiltonian, as above). The invariance, Eq. (8), is a principal result of this paper (seeming not to have appeared previously in the literature). One consequence is that it enables the phenomenology of $`T`$ violation for neutrinos in matter to be stated in a new and transparent way. The $`T`$-violating asymmetries of Eq.
(2) can now be generalised to neutrino propagation through matter of uniform density in the following form:

$$P^{\prime }(\nu _\alpha \to \nu _\beta )-P^{\prime }(\nu _\beta \to \nu _\alpha )=16J\frac{\mathrm{\Delta }_{12}\mathrm{\Delta }_{23}\mathrm{\Delta }_{31}}{\mathrm{\Delta }_{12}^{\prime }\mathrm{\Delta }_{23}^{\prime }\mathrm{\Delta }_{31}^{\prime }}\mathrm{sin}(\mathrm{\Delta }_{12}^{\prime }L/2)\mathrm{sin}(\mathrm{\Delta }_{23}^{\prime }L/2)\mathrm{sin}(\mathrm{\Delta }_{31}^{\prime }L/2)$$ (9)

where Eq. (8) has been used to rewrite $`J^{\prime }`$ in terms of its corresponding vacuum value, $`J`$, the vacuum masses, and the matter-modified masses, a form which is completely independent of the matter-modified mixing matrix elements themselves. This result is exact, and valid for arbitrary vacuum Hamiltonian, i.e. for arbitrary neutrino mass and mixing parameters. Eq. (9) shows clearly that in general, both the magnitude and the wavelength of $`T`$-violating asymmetries are modified in matter compared with their vacuum values, and it makes explicit the correlation between these two effects. The modification of the magnitude is by a factor:

$$R=\frac{J^{\prime }}{J}=\frac{\mathrm{\Delta }_{12}\mathrm{\Delta }_{23}\mathrm{\Delta }_{31}}{\mathrm{\Delta }_{12}^{\prime }\mathrm{\Delta }_{23}^{\prime }\mathrm{\Delta }_{31}^{\prime }},$$ (10)

which, in general, can be larger or smaller than unity. Provided that the asymmetry wavelength is not increased by matter effects beyond observability, potentially useful enhancements of the magnitude of $`T`$ violation can occur in matter. In the form of Eq. (9), a number of results concerning $`T`$ violation for neutrinos propagating in matter can be seen rather easily. For example, it is manifest that even in matter, the $`T`$-violating asymmetries, as defined here, are independent of flavour. In the low-density limit, $`N_e\to 0`$ ($`\mathrm{\Delta }_{ij}^{\prime }\to \mathrm{\Delta }_{ij}`$ for all $`i,j`$), the expression, Eq. (9), reduces to the normal vacuum case, Eq. (2), as expected. Furthermore, in the small-$`L`$ limit (i.e. if the propagation distance is small compared with the scale of the shortest matter oscillation length, $`L\ll 2/\mathrm{\Delta }_{13}^{\prime }`$), the equation reduces to the vacuum case, Eq. (2), as all the sine functions in Eq. (9) may be approximated by their arguments and all the primed variables cancel. For $`L\to 0`$, therefore, there is no observable effect due to matter on $`T`$-asymmetries (nor indeed on any observable aspect of the oscillation phenomenon). In general, if $`J\ne 0`$ and there are no degeneracies in vacuum, Eq. (8) ensures that there are no degenerate eigenvalues in matter ($`\mathrm{\Delta }_{12}^{\prime }\mathrm{\Delta }_{23}^{\prime }\mathrm{\Delta }_{31}^{\prime }\ne 0`$) and hence that $`R`$ remains finite. We note that for an arbitrary Hermitian matrix, $`H`$, the combination of its eigenvalues, $`\mathrm{\Delta }_{12}\mathrm{\Delta }_{23}\mathrm{\Delta }_{31}`$, is the square-root of the discriminant of its eigenvalue equation:

$$\mathrm{\Delta }_{12}\mathrm{\Delta }_{23}\mathrm{\Delta }_{31}=\sqrt{-4𝒮^3+𝒮^2𝒯^2-27𝒟^2+18𝒟𝒯𝒮-4𝒟𝒯^3}$$ (11)

where the invariants $`𝒯`$, $`𝒮`$ and $`𝒟`$ are respectively the trace, the sum of the principal minors and the determinant of $`H`$. This can be further simplified by noting that we have the freedom to add an arbitrary multiple of the unit matrix to $`H`$ without altering its discriminant, so that we can choose either $`𝒯=0`$ or one of the eigenvalues, $`\lambda _i=0`$.
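The invariance is easy to verify numerically. The sketch below (ours) builds a toy vacuum Hamiltonian, adds a diagonal matter term of the form of Eq. (4) with an arbitrary overall scale, and checks Eq. (8) using Eqs. (5)-(6), i.e. $`J=\mathrm{Im}(H_{12}H_{23}H_{31})/\mathrm{\Delta }_{12}\mathrm{\Delta }_{23}\mathrm{\Delta }_{31}`$ in the charged-lepton-diagonal basis; all numbers are illustrative.

```python
import numpy as np

def eigdiff_product(H):
    # Delta12 Delta23 Delta31 from the eigenvalues of a Hermitian matrix
    l1, l2, l3 = np.linalg.eigvalsh(H)
    return (l1 - l2) * (l2 - l3) * (l3 - l1)

def J_of(H):
    # Eq. (6): Delta12 Delta23 Delta31 J = Im(H12 H23 H31) in this basis
    return np.imag(H[0, 1] * H[1, 2] * H[2, 0]) / eigdiff_product(H)

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(X)                          # random unitary mixing matrix
H = U @ np.diag([0.0, 0.4, 1.0]) @ U.conj().T   # toy vacuum Hamiltonian

Hm = H + np.diag([0.3, 0.0, 0.0])               # plus matter term, Eq. (4), up to scale

lhs = eigdiff_product(H) * J_of(H)              # D12 D23 D31 J      (vacuum)
rhs = eigdiff_product(Hm) * J_of(Hm)            # D12' D23' D31' J'  (matter)
assert np.isclose(lhs, rhs)                     # the invariance of Eq. (8)
```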
So, for any given model of the neutrino masses and mixing angles in vacuum, the magnitudes of the $`T`$-violating asymmetries in matter can, in principle, be calculated directly from the vacuum parameters of the model and the (simplified) invariants of the matter-modified Hamiltonian, $`H^{\prime }`$, thereby obviating the need to diagonalise $`H^{\prime }`$ explicitly to find the matter-modified mixing angles. In the general case, application of Eq. (11) to the matter-modified Hamiltonian to calculate the denominator of Eq. (10) yields the square-root of a quartic function of $`N_e`$, with coefficients which are functions of the vacuum mixing parameters (masses and mixing angles) and the neutrino energy. The presence of this quartic function means that $`R`$ has either one or two resonant maxima, as a function of the matter density. These correspond to the well-known matter resonances, occurring approximately where pairs of matter-modified neutrino masses are most nearly degenerate. Values of $`R`$ can be arbitrarily large, if $`J`$ is sufficiently small (clearly, $`J^{\prime }=RJ`$ cannot exceed $`1/6\sqrt{3}`$). For very large values of $`N_e`$, the quartic term dominates, and $`T`$ asymmetries are suppressed by $`1/N_e^2`$. In the remainder of this paper, we explore some specific examples which could have physical relevance. In order to ensure that our discussion is relevant to experiment, we will restrict our considerations to regions of parameter space which are not excluded by present, corroborated neutrino experiments, such as the Super Kamiokande atmospheric neutrino data and the Super Kamiokande and Gallium solar neutrino data. We will use for the lepton mixing matrix, $`U`$, a conventional form in which there are three real mixing angles and one complex phase such that: $`U_{e3}=\mathrm{sin}\theta _{13}`$, $`U_{e2}=\mathrm{sin}\theta _{12}\mathrm{cos}\theta _{13}`$, $`U_{\mu 3}=\mathrm{sin}\theta _{23}\mathrm{cos}\theta _{13}e^{i\delta }`$, and all the other elements are fixed by unitarity. A priori, our preferred scenario for neutrino oscillations is the threefold maximal, or tri-maximal, scheme, which still provides a broadly consistent account of all the corroborated sets of experimental data on neutrino oscillations. In this scheme, all the elements of the lepton mixing matrix have equal moduli of magnitude $`1/\sqrt{3}`$, the vacuum value of $`J`$ takes its maximal value, $`1/(6\sqrt{3})`$, and all observed neutrino disappearance data are the result of neutrino oscillations governed by the scale of the larger neutrino mass-squared difference, $`\delta m_{13}^2\approx 10^{-3}`$ eV<sup>2</sup>, with $`\delta m_{12}^2`$ unresolved even by the solar data. This scheme is summarised by $`\theta _{12}=\theta _{23}=45^{\circ }`$, $`\theta _{13}=\mathrm{sin}^{-1}(1/\sqrt{3})`$ and $`\delta =90^{\circ }`$. In this case, $`R`$ is always less than unity in matter, so that asymmetries are always suppressed. A viable, and even experimentally favoured, alternative to tri-maximal mixing effectively factorises the atmospheric and solar scales by setting $`U_{e3}\simeq 0`$ in the mixing matrix. The atmospheric data then require $`\theta _{23}\simeq 45^{\circ }`$, and an energy-independent solar suppression of $`1/2`$ is obtained by setting $`\theta _{12}=45^{\circ }`$ in the original bi-maximal scheme. We have ourselves proposed a particular variant of the bi-maximal scheme with $`U_{e2}=U_{\mu 2}=U_{\tau 2}=1/\sqrt{3}`$ which simulates very effectively tri-maximal mixing with a solar suppression of 5/9.
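In practice the enhancement factor $`R`$ can be scanned as a function of $`N_e`$ directly from Eq. (11), without any diagonalisation. A minimal sketch (ours; `H` as in the previous example, with the matter term per unit density absorbed into the units of `Ne`):

```python
import numpy as np

def delta_product(H):
    # Eq. (11): Delta12 Delta23 Delta31 = sqrt(discriminant), built from
    # the invariants T (trace), S (sum of principal minors), D (determinant)
    T = np.trace(H).real
    S = sum(np.linalg.det(np.delete(np.delete(H, i, 0), i, 1)).real
            for i in range(3))
    D = np.linalg.det(H).real
    return np.sqrt(-4*S**3 + S**2*T**2 - 27*D**2 + 18*D*T*S - 4*D*T**3)

def R_factor(H, Ne):
    # |R| of Eq. (10); negative Ne corresponds to anti-neutrinos
    dH = np.diag([1.0, 0.0, 0.0])    # matter term of Eq. (4) per unit density
    return delta_product(H) / delta_product(H + Ne * dH)
```

Scanning `Ne` over positive and negative values traces out the one or two resonant maxima discussed above, with negative densities playing the role of anti-neutrinos, as in Fig. 1.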
In bi-maximal-type schemes, $`J_{CP}=0`$ and there are no fundamental $`CP`$- or $`T`$-violating asymmetries involving neutrinos, in vacuum or in matter. There are still fake $`\nu /\overline{\nu }`$ asymmetries due to matter effects, but these are of little fundamental interest. For illustrative purposes, we have explored in this paper an ansatz which interpolates smoothly between tri-maximal and bi-maximal mixing:

$$U=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}\mathrm{cos}\theta _{13}& \frac{1}{\sqrt{2}}\mathrm{cos}\theta _{13}& \mathrm{sin}\theta _{13}\\ -\frac{1}{2}(1+\mathrm{sin}\theta _{13}e^{i\delta })& \frac{1}{2}(1-\mathrm{sin}\theta _{13}e^{i\delta })& \frac{1}{\sqrt{2}}\mathrm{cos}\theta _{13}e^{i\delta }\\ \frac{1}{2}(1-\mathrm{sin}\theta _{13}e^{i\delta })& -\frac{1}{2}(1+\mathrm{sin}\theta _{13}e^{i\delta })& \frac{1}{\sqrt{2}}\mathrm{cos}\theta _{13}e^{i\delta }\end{array}\right)$$ (12)

with $`\delta =90^{\circ }`$. We have assumed that $`\delta m_{13}^2\approx 10^{-3}`$ eV<sup>2</sup> and have allowed $`\delta m_{12}^2`$ to remain variable. This scheme has the following properties:

* $`\nu _\mu `$ and $`\nu _\tau `$ are treated democratically
* $`\nu _1`$ and $`\nu _2`$ are treated democratically
* in the limit $`\mathrm{sin}^2\theta _{13}\to 1/3`$, we obtain tri-maximal mixing
* in the limit $`\mathrm{sin}^2\theta _{13}\to 0`$, we obtain bi-maximal mixing
* $`J_{CP}=(1/4)\mathrm{sin}\theta _{13}\mathrm{cos}^2\theta _{13}`$ varies between its minimal and maximal values, as we move from bi-maximal to tri-maximal mixing.

For arbitrary $`\mathrm{sin}\theta _{13}`$ ($`0<\mathrm{sin}\theta _{13}<\frac{1}{\sqrt{3}}`$), there is always a resonance where $`T`$-violating asymmetries are maximised, and this can be probed by choosing the neutrino energy, $`E`$, or the density of matter traversed, appropriately, as long as there is sufficient pathlength for the asymmetry to develop. The values of neutrino energy and/or matter density at resonance, and the maximum magnitude of the asymmetry there, depend on $`\mathrm{sin}\theta _{13}`$ and the vacuum masses. Figs. 1a and 1b show some examples of the maximum magnitude of the $`T`$-violating asymmetry defined in Eq. (9), as a function of the matter density in units of $`N_A`$ cm<sup>-3</sup>, for several values of $`\mathrm{sin}\theta _{13}`$ in this ansatz. The two figures differ in terms of the hierarchy of vacuum mass values used, and illustrate cases with one and with two resonant densities respectively. All the curves are for a fixed neutrino energy, $`E=1.5`$ GeV, which turns out to be the most suitable energy scale to maximise these effects in terrestrial matter within this model (see Eq. (14)). The Earth’s mantle has an electron density of approximately $`2N_A`$ cm<sup>-3</sup> and the core, roughly $`6N_A`$ cm<sup>-3</sup>. The negative density parts of the curves correspond to anti-neutrinos propagating in normal matter with electron density $`|N_e|`$. We note that enhancements for neutrinos, for example, are typically compensated by suppressions for anti-neutrinos, and/or for neutrinos at different energies. As shown in Ref. , the electron neutrino begins to decouple completely in the high energy limit, and $`T`$ asymmetries are suppressed accordingly. The magnitudes of the asymmetries in vacuum can be read from the curves at the zero density point. Several other features are typical of the generic MSW-like matter effect, e.g., the resonance gets sharper and the asymmetry maximum gets closer to unity as the vacuum mixing angle decreases.
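A short sketch (ours) of the ansatz of Eq. (12), useful for checking that it is unitary and that it reproduces the $`J_{CP}`$ quoted above; the value $`\mathrm{sin}\theta _{13}=0.1`$ is just an illustration:

```python
import numpy as np

def U_ansatz(s13, delta):
    # Eq. (12): interpolates between bi-maximal (s13 -> 0) and
    # tri-maximal (s13^2 -> 1/3) mixing
    c13, e = np.sqrt(1.0 - s13**2), np.exp(1j * delta)
    r2 = np.sqrt(2.0)
    return np.array([
        [ c13/r2,           c13/r2,          s13      ],
        [-(1 + s13*e)/2,    (1 - s13*e)/2,   c13*e/r2 ],
        [ (1 - s13*e)/2,   -(1 + s13*e)/2,   c13*e/r2 ],
    ])

U = U_ansatz(0.1, np.pi/2)                        # delta = 90 degrees
assert np.allclose(U @ U.conj().T, np.eye(3))     # unitarity
J = np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))
assert np.isclose(J, 0.25 * 0.1 * (1 - 0.1**2))   # J_CP = (1/4) s13 c13^2
```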
It is interesting to consider under what circumstances the asymmetry becomes maximal in this ansatz. Even in the limit $`\mathrm{sin}\theta _{13}\to 0`$ ($`\mathrm{sin}\theta _{13}\ne 0`$), where $`T`$- and $`CP`$-violating asymmetries in vacuum become arbitrarily small, they can reach their maximum possible value of unity in matter. We find that this can be achieved if

$$\delta m_{12}^2=2\mathrm{cos}\theta _{13}\frac{\sqrt{2}-\mathrm{cos}\theta _{13}}{1+\mathrm{sin}^2\theta _{13}}\delta m_{13}^2$$ (13)

and then the matter density and neutrino energy at resonance are related by:

$$\sqrt{2}GN_e=\frac{1}{\sqrt{2}}\frac{(1-3\mathrm{sin}^2\theta _{13})(\sqrt{2}-\mathrm{cos}\theta _{13})}{1+\mathrm{sin}^2\theta _{13}}\delta m_{13}^2/2E.$$ (14)

Under the above two conditions, threefold maximal mixing is achieved at resonance. Eq. (13) implies that for such mixing at resonance, the two $`\mathrm{\Delta }m^2`$ values must be of the same order of magnitude, which is probably not ruled out by the present data. Fig. 2 shows an example of how one of the asymmetries in Fig. 1 develops with propagation distance, in units of 1000 km. It is plotted for neutrinos in matter with a constant density of $`1.9N_A`$ cm<sup>-3</sup>, i.e. close to the density of the Earth’s mantle. The matter-enhanced asymmetry is compared with the vacuum asymmetry and the corresponding matter-enhanced asymmetry for anti-neutrinos. The enhanced asymmetry does not develop to the 100% level within the distance scale of the Earth’s diameter in this case (it would, given a longer pathlength), although there are cases where it does. It does however considerably exceed the vacuum asymmetry, along almost the whole trajectory within the Earth, and may be observable when the vacuum asymmetry would not be. While Eq. (13) may not be satisfied exactly in nature, there is a significant range of parameter space over which large enhancements are possible. We have found the conditions under which the enhancements are maximal, and here, at least for some values of the parameters, matter can induce a situation where the mixing matrix is arbitrarily close to the tri-maximal form. Such a scenario might well be obtained in nature, and we recommend that large $`T`$- and $`CP`$-violating asymmetries be searched for in future long-baseline neutrino experiments. It may even be possible to site experiments to exploit the matter effects in the Earth so as to maximise the asymmetries.

Figure Captions

Figure 1. Examples of resonant enhancement and suppression of $`T`$ violation for neutrino oscillations in matter using the ansatz of Eq. (12). The maximum magnitude of the $`T`$-violating asymmetry is plotted as a function of matter density and $`\mathrm{sin}\theta _{13}`$, for neutrino energy $`E=1.5`$ GeV, with a) $`\delta m_{13}^2=1.0\times 10^{-3}`$ eV<sup>2</sup> and $`\delta m_{12}^2=0.7\times 10^{-3}`$ eV<sup>2</sup>; b) $`\delta m_{13}^2=1.0\times 10^{-3}`$ eV<sup>2</sup> and $`\delta m_{12}^2=0.2\times 10^{-3}`$ eV<sup>2</sup>. In each case, the point marked “o” represents the vacuum value of the asymmetry for the same vacuum mass parameters. The Earth’s mantle has an electron density of approximately $`2N_A`$ cm<sup>-3</sup> and the core, roughly $`6N_A`$ cm<sup>-3</sup>. NB. The density scale can be converted to a scale of energy in GeV for neutrinos propagating in the Earth’s mantle by multiplying the numbers by $`0.75`$.

Figure 2. An example of resonant enhancement of $`T`$ violation for neutrino oscillations in matter.
The solid curve shows the $`T`$-violating asymmetry as a function of propagation distance, for neutrinos of energy $`E=1.5`$ GeV, with $`\delta m_{13}^2=1.0\times 10^{-3}`$ eV<sup>2</sup>, $`\delta m_{12}^2=0.7\times 10^{-3}`$ eV<sup>2</sup> and $`\mathrm{sin}\theta _{13}=0.1`$. The matter electron density in this example is $`1.9N_A`$ cm<sup>-3</sup>. The dashed line shows the same quantity in vacuum for the same vacuum parameters, and the dotted line shows the same quantity for anti-neutrinos.
# The Internal Dynamics of Globular Clusters

## 1 Introduction

There are about 150 globulars orbiting in the halo of our Galaxy. They look like huge swarms of stars, characterized by symmetry and apparent smoothness. Fig. 1 below displays an image of NGC 5139 $`\equiv `$ $`\omega `$ Centauri, the brightest and most massive galactic globular cluster. This 40′ by 40′ image from the Digital Sky Survey does not reach, in spite of its rather large angular size, the outer parts of the cluster. With its tidal radius of about 40′–50′, the apparent diameter of $`\omega `$ Centauri on the plane of the sky is significantly larger than the apparent 30′ diameter of the full moon. Globular clusters are old stellar systems, made of one single generation of stars. Although still somewhat uncertain, their individual ages range between about 10 and 15 Gyr, with possible significant differences, up to a few gigayears, from one cluster to the other. Other properties of globular clusters exhibit significant variations: e.g., their integrated absolute magnitudes range from $`M_V^{int}`$ = –1.7 to –10.1 mag; their total masses from $`M_{tot}`$ = $`10^3`$ to 5 $`\times `$ $`10^6M_{\odot }`$; their galactocentric distances from 2 to 120 kpc.

## 2 A few dynamical time scales

The dynamics of any stellar system may be characterized by the following three dynamical time scales: (i) the crossing time $`t_{cr}`$, which is the time needed by a star to move across the system; (ii) the relaxation time $`t_{rlx}`$, which is the time needed by the stellar encounters to redistribute energies, setting up a near-maxwellian velocity distribution; (iii) the evolution time $`t_{ev}`$, which is the time during which energy-changing mechanisms operate, stars escape, while the size and profile of the system change. In the case of globular clusters, $`t_{cr}\approx 10^6`$ yr, $`t_{rlx}\approx 100\times 10^6`$ yr, and $`t_{ev}\approx 10\times 10^9`$ yr. It is worth mentioning that several (different and precise) definitions exist for the relaxation time. The most commonly used is the half-mass relaxation time $`t_{rh}`$ of Spitzer (1987, Eq. 2-62), where the values for the mass-weighted mean square velocity of the stars and the mass density are those evaluated at the half-mass radius of the system (see Meylan & Heggie 1997 for a review). It has been suggested that the combination of relaxation with the chaotic nature of stellar orbits in non-integrable potentials (e.g., most axisymmetric potentials) causes a great enhancement in the rate of relaxation (Pfenniger 1986, Kandrup & Willmes 1994). Another suggestion which, if confirmed, would revolutionise the theory of relaxation was made by Gurzadyan & Savvidy (1984, 1986), who proposed a much faster relaxation time scale than in standard theory, by a factor of order $`N^{2/3}`$. There is some support for this view on observational grounds (Vesperini 1992a,b). From size, luminosity, and mass points of view, globular clusters are bracketed by open clusters on the lower side and dwarf elliptical galaxies on the upper side. Table 1 displays, for open clusters, globular clusters, and galaxies, some interesting relations between the above three time scales. For open clusters, crossing time $`t_{cr}`$ and relaxation time $`t_{rlx}`$ are more or less equivalent, both being significantly smaller than the evolution time $`t_{ev}`$. This means that most open clusters dissolve within a few gigayears.
For galaxies, relaxation time $`t_{rlx}`$ and evolution time $`t_{ev}`$ are more or less equivalent, both being significantly larger than the crossing time $`t_{cr}`$. This means that galaxies are not relaxed, i.e., not dynamically evolved. It is only for globular clusters that all three time scales are significantly different, implying plenty of time for a significant dynamical evolution in these stellar systems, while avoiding quick evaporation. Consequently, globular clusters represent an interesting class of dynamical stellar systems in which some dynamical processes take place on time scales shorter than their age, i.e., shorter than the Hubble time, providing us with unique dynamical laboratories for learning about two-body relaxation, mass segregation from equipartition of energy, stellar collisions, stellar mergers, and core collapse. All these dynamical phenomena are related to the internal dynamical evolution only, and would also happen in isolated globular clusters. The external dynamical disturbances — tidal stripping by the galactic gravitational field — influence equally strongly the dynamical evolution of globular clusters.

## 3 Model building for globular clusters

Already before the pioneering work of von Hoerner (1960), who made the first $`N`$-body calculations with $`N`$ = 16, it was realized that computation of individual stellar motions could be replaced by statistical methods. Some parallels were drawn between a molecular gas and star clusters: the stars were considered as mass points representing the molecules in a collisionless gas. The analogy between a gas of molecules and a gas of stars is subject to criticisms, since the mean free path of a molecule is generally quite small compared with the size or scale height of the system, whereas the mean free path of a star is much larger than the diameter of the cluster; in addition, molecules travel along straight lines, while stars move along orbits in the gravitational potential of all the other stars of the stellar system. Stellar collisions in clusters were studied by Jeans (1913), who remarked that they might be important in such stellar systems. The problem was then to seek the possible spherical distribution of such a gas in a steady state.

### 3.1 Boltzmann’s equation

The commonest way of defining a model of a star cluster is in terms of its distribution function $`f(𝐫,𝐯,m)`$, which is defined by the statement that $`fd^3𝐫d^3𝐯dm`$ is the mean number of stars with positions in a small box $`d^3𝐫`$ in space, velocities in a small box $`d^3𝐯`$ and masses in an interval $`dm`$. In terms of this description a fairly general equation for the dynamical evolution is Boltzmann’s equation,

$$\frac{\partial f}{\partial t}+𝐯\cdot \nabla _𝐫f-\nabla _𝐫\mathrm{\Phi }\cdot \nabla _𝐯f=\left(\frac{\partial f}{\partial t}\right)_{enc},$$ (1)

where $`\mathrm{\Phi }`$ is the smoothed gravitational potential per unit mass, and the right-hand side describes the effect of two-body encounters. The distribution $`f`$ is a function of 7 variables if we take into account time. This is rather more than can be handled. But it is possible to reduce the complexity posed by Boltzmann’s equation by taking moments.
By taking moments of Boltzmann’s equation with respect to velocities we obtain, for n = 0 and 1, the Jeans equations, which are expressions describing the rotation and the velocity dispersion:

$$\int \mathrm{Boltzmann}\times v_j^n\,d^3𝐯=\mathrm{Jeans\ Equ.}$$ (2)

By taking moments of the Jeans equations with respect to positions, we obtain the Tensor Virial equations, which are expressions relating the global kinematics to the morphology of the system, e.g., the ratio $`v_{}/\sigma _{}`$ of ordered to random motions:

$$\int \mathrm{Jeans}\times x_j^n\,d^3𝐱=\mathrm{Tensor\ Virial}$$ (3)

In these ways, we obtain information about the general properties of solutions of Boltzmann’s equation without recovering any solutions.

### 3.2 Liouville’s equation and Jeans’ theorem

The general Boltzmann’s equation can be greatly simplified in other ways. Because $`t_{cr}`$ is so short, after a few orbits the stars are mixed into a nearly stationary distribution, and so the term $`\partial f/\partial t`$ is practically equal to zero. In a similar way, because $`t_{rh}`$ is so long, the collision term $`(\partial f/\partial t)_{enc}`$ can be ignored. What is left, i.e.,

$$𝐯\cdot \nabla _𝐫f-\nabla _𝐫\mathrm{\Phi }\cdot \nabla _𝐯f=0,$$ (4)

is an equilibrium form of what is frequently called Liouville’s equation, or the collisionless Boltzmann’s equation, or the Vlasov equation. In simple cases, the general solution of Equ. 4 is given by Jeans’ theorem, which states that $`f`$ must be a function of the constants of the equations of motion of a star, e.g., of the stellar energy per unit mass $`\epsilon =v^2/2+\mathrm{\Phi }`$. Such quantities are also called integrals of the motion. If not all integrals of the motion are known, such functions are still solutions, though not the most general. For a self-consistent solution, the distribution function $`f`$ must correspond to the density $`\rho `$ required to provide the cluster potential $`\mathrm{\Phi }_c`$, i.e.:

$$\nabla ^2\mathrm{\Phi }_c=4\pi G\rho =4\pi G\int mf\,d^3𝐯\,dm.$$ (5)

Many different kinds of models may be constructed with this approach. In the first place there is considerable freedom of choice over which integrals of the motion to include. In the second place one is free to choose the functional dependence of these integrals, i.e., the analytic form of the distribution function (see, e.g., Binney 1982, and Binney & Tremaine 1987). King (1966) provided the first grid of models (with different concentrations $`c`$ = log ($`r_t/r_c`$), where $`r_t`$ and $`r_c`$ are the tidal and core radii, respectively) that incorporate the three most important elements governing globular cluster structure: dynamical equilibrium, two-body relaxation, and tidal truncation. These models depend on one integral of the motion only — the stellar energy per unit mass $`\epsilon `$ — and the functional dependence is based on the lowered maxwellian (see Equ. 6 below). Such models are spherical and their velocity dispersion tensor is everywhere isotropic. Models more complicated have been built since then. Da Costa & Freeman (1976) generalized the simple single-mass King models to produce more realistic multi-mass models with full equipartition of energy in the centre. Gunn & Griffin (1979) developed multi-mass models whose distribution functions depend on the stellar energy per unit mass $`\epsilon `$ and the specific angular momentum $`l`$. Such models are spherical and have a radial anisotropic velocity dispersion ($`\overline{v_r^2}\geq \overline{v_\theta ^2}=\overline{v_\varphi ^2}`$).
Called King-Michie models, they associate the lowered maxwellian of the King model with the anisotropy factor of the Eddington models:

$$f(\epsilon ,l)\propto \left(\mathrm{exp}(-2j^2\epsilon )-\mathrm{exp}(-2j^2\epsilon _t)\right)\mathrm{exp}(-j^2l^2/r_a^2)$$ (6)

Lupton & Gunn (1987) developed multi-mass models whose distribution functions depend on a third integral of motion $`I_3`$, in addition to the stellar energy per unit mass $`\epsilon `$ and the component of angular momentum parallel to the rotation axis $`l_z`$. Although no general analytical form for a third integral is available, the existence of an analytic third integral of motion $`I_3`$ in special cases has been known for decades, since the work by Jeans (1915). Because the rotation creates a non-spherical potential, $`I_3`$ = $`l^2`$ is in fact only an approximate integral, and Lupton & Gunn’s distribution function does not obey the collisionless Boltzmann’s equation for equilibrium (Eq. 4). These were notable landmarks in these developments, among many others. Table 2 hereafter, from Meylan & Heggie (1997), lists for the static models (King, King-Michie, 3-Integral) and for the evolutionary models (gas, Fokker-Planck, N-Body) the dynamical features and dynamical processes they take into account. Under the heading Dynamical Process, the second column in Table 2 states what kind of physical process it is that is named in the first column.

## 4 Parametric and non-parametric approaches

The method in the above section for analyzing globular cluster data is a model-building, or parametric, approach. One begins by postulating a functional form for the distribution function $`f`$ and the gravitational potential $`\mathrm{\Phi }`$; often the two are linked via Poisson’s equation, i.e. the stars described by $`f`$ are assumed to contain all of the mass that contributes to $`\mathrm{\Phi }`$. This $`f`$ is then projected into observable space and its predictions compared with the data. If the discrepancies are significant, the model is rejected and another one is tried. If no combination of functions $`\{f,\mathrm{\Phi }\}`$ from the adopted family can be found that reproduces the data, one typically adds extra degrees of freedom until the fit is satisfactory. For instance, $`f`$ may be allowed to depend on a larger number of integrals of the motion (Lupton & Gunn 1987) or the range of possible potentials may be increased by postulating additional populations of unseen stars (Da Costa & Freeman 1976). This approach has enjoyed considerable popularity, in part because it is computationally straightforward but also because, as King (1981) has emphasized, globular cluster data are generally well fitted by these standard models. But one never knows which of the assumptions underlying the models are adhered to by the real system and which are not. For instance, a deviation between the surface density profile of a globular cluster and the profile predicted by an isotropic model is sometimes taken as evidence that the real cluster is anisotropic. But it is equally possible that the adopted form for $`f(\epsilon )`$ is simply in error, since by adjusting the dependence of $`f`$ on $`\epsilon `$ one can reproduce any density profile without anisotropy. Even including the additional constraint of a measured velocity dispersion profile does not greatly improve matters, since it is always possible to trade off the mass distribution with the velocity anisotropy in such a way as to leave the observed dispersions unchanged (Dejonghe & Merritt 1992).
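As a concrete illustration of the kind of postulated distribution function at stake in this discussion, the following minimal sketch (ours) evaluates the unnormalised King-Michie $`f(\epsilon ,l)`$ of Eq. (6); the parameter values `j`, `eps_t` and `r_a` are arbitrary illustrative numbers, not fitted ones.

```python
import numpy as np

def f_king_michie(eps, l, j=1.0, eps_t=6.0, r_a=2.0):
    # Eq. (6), unnormalised: a lowered maxwellian in the energy per unit
    # mass eps (bound stars have eps < eps_t), times the Eddington
    # anisotropy factor in the specific angular momentum l
    lowered = np.exp(-2.0 * j**2 * eps) - np.exp(-2.0 * j**2 * eps_t)
    aniso = np.exp(-(j * l / r_a)**2)
    return np.where(eps < eps_t, lowered * aniso, 0.0)
```

The anisotropy factor suppresses high-$`l`$ orbits, which is what produces the radial anisotropy in the halo; letting $`r_a`$ go to infinity recovers King’s isotropic lowered maxwellian.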
Conclusions drawn from the model-building studies are hence very difficult to interpret; they are valid only to the extent that the assumed functional forms for $`f`$ and $`\mathrm{\Phi }`$ are correct. These arguments suggest that it might be profitable to interpret kinematical data from globular clusters in an entirely different manner, placing much stronger demands on the data and making fewer ad hoc assumptions about $`f`$ and $`\mathrm{\Phi }`$. Ideally, the unknown functions should be generated non-parametrically from the data. Such an approach, pioneered by Merritt (see, e.g., Merritt 1993a,b, 1996), has rarely been tried in the past because of the inherent instability of the deprojection process. We provide hereafter the results of two studies (parametric and non-parametric, respectively) of the globular cluster $`\omega `$ Centauri, both studies using exactly the same observational data (surface brightness profile and stellar radial velocities).

### 4.1 Parametric approach applied to $`\omega `$ Centauri

The mean radial velocities obtained with CORAVEL (Mayor et al. 1997) for 469 individual stars located in the galactic globular cluster $`\omega `$ Centauri provide the velocity dispersion profile. It increases significantly from the outer parts inwards: the 16 outermost stars, located between 19.2′ and 22.4′ from the center, have a velocity dispersion $`\sigma `$ = 5.1 $`\pm `$ 1.6 km s<sup>-1</sup>, while the 16 innermost stars, located within 1′ of the center, have a velocity dispersion $`\sigma `$ = 21.9 $`\pm `$ 3.9 km s<sup>-1</sup>. This inner value of about $`\sigma `$ = 22 km s<sup>-1</sup> is the largest velocity dispersion value obtained in the core of any galactic globular cluster (Meylan et al. 1995). A simultaneous fit of these radial velocities and of the surface brightness profile to a multi-mass King-Michie dynamical model provides mean estimates of the total mass equal to $`M_{tot}`$ = 5.1 $`\times `$ $`10^6M_{\odot }`$, with a corresponding mean mass-to-light ratio $`M/L_V`$ = 4.1. The present results emphasize the fact that $`\omega `$ Centauri is not only the brightest but also, by far, the most massive galactic globular cluster (Meylan et al. 1995). The fact that only models with strong anisotropy of the velocity dispersion ($`r_a`$ = 2-3 $`r_c`$) agree with the observations does not give a definitive proof of the presence of such anisotropy, because of the fundamental indeterminacy in the comparison between King-Michie models and observations. A strong anisotropy is nevertheless expected outside of the core of $`\omega `$ Centauri, given the large value of the half-mass relaxation time ($`26\lesssim t_{rh}\lesssim 46\times 10^9`$ yr) (Meylan et al. 1995). The reliability of the present application of King-Michie models might be questionable on a few fundamental points. In addition to the arbitrary choices of the two integrals of the motion and of the functional dependence of the distribution function on these two integrals, there is also the assumption of thermal equilibrium among the different mass classes in the central parts of the cluster. From a theoretical point of view, mass segregation has been one of the early important results to emanate from small N-body simulations. Since then, large N-body simulations and models integrating the Fokker-Planck equation for many thousands of stars have fully confirmed the presence of equipartition.
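The quoted total mass can be cross-checked, to order of magnitude only, with a simple virial-type estimator, $`M\approx \eta \sigma ^2r_h/G`$. The sketch below is ours, not part of the analysis above; $`\eta \approx 7.5`$ follows from taking a gravitational radius $`r_g\approx 2.5r_h`$ and $`v^2=3\sigma ^2`$ for an isotropic system, and the adopted global dispersion and half-mass radius are round illustrative numbers, not the fitted values of the King-Michie analysis.

```python
# Crude virial mass estimate: M ~ eta * sigma_los^2 * r_h / G
G = 4.30e-3        # gravitational constant in pc (km/s)^2 / Msun
sigma_los = 16.0   # km/s  (assumed global line-of-sight dispersion)
r_h = 8.0          # pc    (assumed 3D half-mass radius)
M_vir = 7.5 * sigma_los**2 * r_h / G
print(f"M_vir ~ {M_vir:.1e} Msun")   # a few 10^6 Msun, same order as the fit
```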
Thanks to the high angular resolution of the Hubble Space Telescope (HST) cameras (FOC and WFPC2), mass segregation has now been observed in the core of a few galactic globular clusters (see, e.g., Anderson 1997, 1999). In the case of 47 Tucanae, the luminosity function observed by Anderson (1997, 1999) is in close agreement with equipartition-assuming King-Michie models and fails to fit the no-segregation models. This dichotomy is not as clear in the case of $`\omega `$ Centauri, probably because of its rather long central relaxation time. The problem about mass segregation does not concern its existence — it is happening — but rather its quantitative evolution. Can there be an end to mass segregation, i.e., does the system ever reach a stable thermal equilibrium? Underlying is the problem of core collapse (see Spitzer 1969, Chernoff & Weinberg 1990), which is briefly described in § 7 below.

### 4.2 Non-parametric approach applied to $`\omega `$ Centauri

The stellar dynamics of $`\omega `$ Centauri is inferred from the same radial velocities of 469 stars used in § 4.1 (Mayor et al. 1997). By assuming that the residual velocities are isotropic in the meridional plane, $`\sigma _\varpi =\sigma _z\equiv \sigma `$, Merritt et al. (1997) derived the dependence of the two independent velocity dispersions $`\sigma `$ and $`\sigma _\varphi `$ on various positions in the meridional plane. The central velocity dispersion parallel to the meridional plane is $`\sigma `$ = $`17_{-2.6}^{+2.1}`$ km s<sup>-1</sup>. With this approach, there is no evidence for significant anisotropy anywhere in $`\omega `$ Centauri. Thus, this cluster can reasonably be described as an isotropic oblate rotator (Merritt et al. 1997). The binned surface brightness measurements from Meylan (1986) are plotted in Fig. 2a, where the solid line is an estimate of the surface brightness profile $`\mathrm{\Sigma }(R)`$, as the solution to the optimization problem. The estimate of the space density profile $`\nu (r)`$ may be defined as the Abel inversion of the estimate $`\mathrm{\Sigma }(R)`$:

$$\nu (r)=-\frac{1}{\pi }\int _r^{\mathrm{}}\frac{d\mathrm{\Sigma }}{dR}\frac{dR}{\sqrt{R^2-r^2}}.$$ (7)

The dashed lines in Fig. 2b are 95% confidence bands on the estimate of $`\nu (r)`$. Here $`r`$ is an azimuthally-averaged mean radius. Both profiles are normalized to unit total number. This profile actually has a power-law cusp, $`\nu \propto r^{-1}`$, inside of 0.5′; however the confidence bands are consistent with a wide range of slopes in this region, including even a profile that declines toward the center. The gravitational potential and mass distribution in $`\omega `$ Centauri are consistent with the predictions of a model in which the mass is distributed in the same way as the bright stars; the cluster is assumed to be oblate and edge-on, but mass is not assumed a priori to follow light. The central mass density is $`2110_{-510}^{+530}M_{\odot }\,\mathrm{pc}^{-3}`$. However this result may be strongly dependent on the assumption that the velocity ellipsoid is isotropic in the meridional plane. This central mass density determination is in full agreement with the values deduced from King-Michie models by Meylan et al. (1995). There is no significant evidence for a difference between the velocity dispersions parallel and perpendicular to the meridional plane. The mass distribution inferred from the kinematics is slightly more extended than, though not strongly inconsistent with, the luminosity distribution.
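Eq. (7) can be evaluated stably despite the square-root singularity at $`R=r`$ by substituting $`R=r\mathrm{cosh}u`$, which gives $`\nu (r)=-(1/\pi )\int _0^{\mathrm{}}\mathrm{\Sigma }^{\prime }(r\mathrm{cosh}u)\,du`$. The following minimal sketch (ours, not the estimator of Merritt et al.) checks the quadrature on a profile whose deprojection is known in closed form:

```python
import numpy as np

def abel_invert(sigma_prime, r, u_max=10.0, n=2000):
    # Eq. (7) with R = r*cosh(u), which absorbs the endpoint singularity:
    # nu(r) = -(1/pi) * int_0^inf Sigma'(r*cosh(u)) du
    u, du = np.linspace(0.0, u_max, n, retstep=True)
    g = sigma_prime(r * np.cosh(u))
    return -du * (g.sum() - 0.5 * (g[0] + g[-1])) / np.pi   # trapezoid rule

# check against a known pair: Sigma(R) = (1+R^2)^-2  <-->  nu(r) = (3/4)(1+r^2)^-5/2
sigma_prime = lambda R: -4.0 * R * (1.0 + R**2)**-3.0
print(abel_invert(sigma_prime, 1.0), 0.75 * 2.0**-2.5)      # both ~0.1326
```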
The derived two-integral distribution function $`f(\epsilon ,l_z)`$ for the stars in $`\omega `$ Centauri is fully consistent with the available data. Large amounts of kinematical data (radial velocities and proper motions for a few thousand stars) will soon allow the efficient use of the non-parametric approach in the case of the two largest galactic globular clusters, viz. $`\omega `$ Centauri and 47 Tucanae (Freeman et al., Meylan et al., both in preparation).

## 5 Systemic rotation of $`\omega `$ Centauri

Systemic rotation in globular clusters has been expected for a long time, especially in $`\omega `$ Centauri, because of its significant flattening. The first clear evidence of such rotation was observed, in this cluster and in 47 Tucanae, by Meylan & Mayor (1986). More recently, rather than fitting the data to a family of models, an estimate of the rotation was obtained non-parametrically, by direct operation on the data, by Merritt et al. (1997). Fig. 3 displays the contours of constant $`\overline{v}_\varphi `$, which are remarkably similar in shape to those of the parametric model postulated by Meylan & Mayor (1986), at least in the region near the center where the solution is strongly constrained by the data. The rotational velocity field is clearly not cylindrical; instead, $`\overline{v}_\varphi `$ has a peak value of 8 km s<sup>-1</sup> at about 7′ from the center in the equatorial plane, and falls off both with increasing $`\varpi `$ and $`z`$. In the region inside the peak, the rotation is approximately solid-body; at large radii, the available data do not strongly constrain the form of the rotational velocity field. The mean motions are consistent with axisymmetry, once a correction is made for perspective rotation resulting from the cluster proper motion. The above inferred rotational velocity field in $`\omega `$ Centauri agrees remarkably well with the predictions of the theoretical model by Einsel & Spurzem (1999), who have investigated the influence of rotation on the dynamical evolution of collisional stellar systems by solving the orbit-averaged Fokker-Planck equation in $`(\epsilon ,l_z)`$-space. However it is not clear that any relevant comparison exists: because of the long relaxation time in $`\omega `$ Centauri, the rotation probably still reflects to a large extent the state of the cluster shortly after its formation. The observed estimate of $`\overline{v}_\varphi (\varpi ,z)`$ might therefore be most useful as a constraint on cluster formation models. But things may be even more complicated! Using their calcium abundances for about 400 stars with radial velocities by Mayor et al. (1997), Norris et al. (1997) found that the 20% metal-rich tail of the [Ca/H] distribution is not only more centrally concentrated, but is also kinematically cooler than the 80% metal-poor component. While the metal-poorer component exhibits well-defined systemic rotation, the metal-richer one shows no evidence of it, in contradistinction to the simple dissipative enrichment scenario of cluster formation.

## 6 Rotation vs. velocity dispersion

All results about rotation depend on the value of the angle $`i`$ between the plane of the sky and the axis of symmetry of the cluster. This angle remains unknown. Since the two best studied clusters, viz.
$`\omega `$ Centauri and 47 Tucanae, belong to the small group of clusters which, among the 150 galactic globular clusters, are the flattest ones, we can expect, from a statistical point of view, that their angles $`i`$ should not be very different from 0° $`\lesssim i\lesssim `$ 30°, the clusters being seen nearly edge-on. The importance of rotation (namely, of its projection along the line of sight) increases as $`i`$ gets closer to 0°. The relative importance of rotational to random motions is given by the ratio $`v_{}/\sigma _{}`$, where $`v_{}^2`$ is the mass-weighted mean square rotation velocity and $`\sigma _{}^2`$ is the mass-weighted mean square random velocity. For $`i`$ = 90° and 60°, in $`\omega `$ Centauri the ratio $`v_{}/\sigma _{}`$ = 0.35 and 0.39, and in 47 Tucanae the ratio $`v_{}/\sigma _{}`$ = 0.40 and 0.46, respectively (Meylan & Mayor 1986). Even with $`i`$ = 45°, the dynamical importance of rotation remains weak compared to random motions. The ratio of rotational to random kinetic energies is $`\approx `$ 0.1, confirming the fact that globular clusters are, above all, hot stellar systems. Rotation has been directly observed and measured in twelve globular clusters (see Table 7.2 in Meylan & Heggie 1997). The diagram ($`v_{}/\sigma _{}`$ vs. $`\epsilon `$) of the ratio of ordered $`v_{}`$ to random $`\sigma _{}`$ motions as a function of the ellipticity $`\epsilon `$ has been frequently used for elliptical galaxies, and its meaning is extensively discussed in Binney & Tremaine (1987, Chapter 4.3). The low luminosity ($`L\lesssim `$ 2.5 $`\times `$ 10<sup>10</sup> $`L_{\odot }`$) elliptical galaxies and spheroids have ($`v_{}/\sigma _{}`$, $`\epsilon `$) values which are scattered along the relation for oblate systems with isotropic velocity-dispersion tensors, while the high luminosity ($`L\gtrsim `$ 2.5 $`\times `$ 10<sup>10</sup> $`L_{\odot }`$) elliptical galaxies have ($`v_{}/\sigma _{}`$, $`\epsilon `$) values which are scattered below the above relation, indicating the presence of anisotropic velocity-dispersion tensors. Given their small mean ellipticities (0.00 $`\lesssim \epsilon \lesssim `$ 0.12), globular clusters are located in the lower-left corner of the ($`v_{}/\sigma _{}`$ vs. $`\epsilon `$) diagram, an area characterized by isotropy or mild anisotropy of the velocity-dispersion tensor.

## 7 Overall dynamical evolution towards core collapse

Until the late 1970s, globular clusters were thought to be relatively static stellar systems, since most surface-brightness profiles of globular clusters were successfully fitted by equilibrium models. Nevertheless, it had already been known, since the early 1960s, that globular clusters had to evolve dynamically, even when considering only relaxation, which causes stars to escape, consequently cluster cores to contract and envelopes to expand. But dynamical evolution of globular clusters was not yet a field of research by itself, since the very few theoretical investigations had led to a most puzzling paradox: core collapse (Hénon 1961, Lynden-Bell & Wood 1968). It was only in the early 1980s that the field grew dramatically. On the theoretical side, the development of high-speed computers allowed numerical simulations of dynamical evolution. Nowadays, Fokker-Planck and conducting-gas-sphere evolutionary models have been computed well into core collapse and beyond, leading to the discovery of possible post-collapse oscillations.
In a similar way, hardware and software improvements of N-body codes provide very interesting first results for 10<sup>4</sup>-body simulations (Makino 1996a,b, Spurzem & Aarseth 1996, Portegies Zwart et al. 1999), and give the first genuine hope, in a few years, for 10<sup>5</sup>-body simulations. On the observational side, the manufacture of low-readout-noise Charge Coupled Devices (CCDs), combined since 1990 with the high spatial resolution of the Hubble Space Telescope (HST), allows long integrations on faint astronomical targets in crowded fields, and provides improved data analyzed with sophisticated software packages.

## 8 Gravothermal instability, gravothermal oscillations

For many years (between about 1940 and 1960) secular evolution of globular clusters was understood in terms of the evaporative model of Ambartsumian (1938) and Spitzer (1940). In this model it is assumed that two-body relaxation attempts to set up a maxwellian distribution of velocities on the time scale of a relaxation time, but that stars with velocities above the escape velocity promptly escape. The next major step in understanding came when it was discovered that evolution arises also when stars escape from the inner parts of the cluster to larger radii, without necessarily escaping altogether. Antonov (1962) realised that these internal readjustments need not lead to a structure in thermal equilibrium, because thermal equilibrium may be unstable in self-gravitating systems (see Lynden-Bell & Wood 1968). The well-known process of core collapse is interpreted as a manifestation of the gravothermal instability. Core collapse has first been observed and studied in simulations using gas and Fokker-Planck models. For an isolated cluster (without a tidal field), the time scale for the entire evolution of the core (when the density has formally become infinite) is about 15.7 $`t_{rh}(0)`$, when expressed in terms of the initial half-mass relaxation time (Cohn 1980). This result is for an isotropic code starting from a Plummer model with stars of equal mass, while for an anisotropic code the time extends to 17.6 $`t_{rh}(0)`$ (Takahashi 1995). The collapse time is generally shorter in the presence of unequal masses (Inagaki & Wiyanto 1984, Chernoff & Weinberg 1990). Murphy & Cohn (1988) give surface brightness and velocity dispersion profiles at various times during collapse, for a system with a reasonably realistic present-day mass spectrum. Addition of the effects of stellar evolution, modeled as instantaneous mass loss at the end of main sequence evolution, delays the onset of core collapse (Angeletti & Giannone 1980, Applegate 1986, Chernoff & Weinberg 1990, Kim et al. 1992). The effect of a galactic time-dependent tidal field can be to accelerate core collapse (Spitzer & Chevalier 1973). Examples of $`N`$-body models which illustrate various aspects of core collapse include Aarseth (1988), where $`N=1,000`$, Giersz & Heggie (1993) ($`N\approx 2,000`$), Spurzem & Aarseth (1996) ($`N=10,000`$), and Makino (1996a,b; see Fig. 4 hereafter) ($`N\approx 32,000`$). At one time, it was not at all certain that a cluster could survive beyond the end of core collapse, with its singularity characterized by infinite central density. Thus, many experts doubted whether the study of post-collapse clusters had any relevance to the interpretation of observations.
Hénon (1961, 1965) showed that a cluster without such a singularity would evolve into one that did, and he realised that, in a real system, a flux of energy might well be supplied by the formation and evolution of binary stars, governing a series of collapses and expansions of the cluster core. Numerical simulations using gas and Fokker-Planck models show that systems with at least a few thousand stars (Goodman 1987, Heggie & Ramamani 1989, Breeden et al. 1994) follow a complicated succession of collapses and expansions, called gravothermal oscillations by their discoverers (Sugimoto & Bettwieser 1983, Bettwieser & Sugimoto 1984). Quite apart from their relevance in nature, these oscillations are interesting in their own right, as an example of chaotic dynamics. From this point of view they have been studied by Allen & Heggie (1992), Breeden & Packard (1994), and Breeden & Cohn (1995). In 1995 the genuine occurrence of gravothermal oscillations in $`N`$-body systems was spectacularly demonstrated by Makino (1996a,b). These results confirm that the nature of post-collapse evolution in $`N`$-body systems is far more stochastic than in the simplified continuum models on which so much of our understanding rests at present.

## 9 Observational evidence of core collapse

In the eighties, CCD observations allowed a systematic investigation of the inner surface brightness profiles (within $`\sim `$ 3′) of 127 galactic globular clusters (Djorgovski & King 1986, Chernoff & Djorgovski 1989, Trager et al. 1995). These authors sorted the globular clusters into two different classes: (i) the King model clusters, whose surface brightness profiles resemble a single-component King model with a flat isothermal core and a steep envelope, and (ii) the collapsed-core clusters, whose surface brightness profiles follow an almost pure power law with an exponent of about –1. In the Galaxy, about 20% of the globular clusters belong to the second type, exhibiting in their inner regions apparent departures from King-model profiles. Consequently, they are considered to have collapsed cores. The globular cluster M15 has long been considered as a prototype of the collapsed-core star clusters. High-resolution imaging of the centre of M15 has resolved the luminosity cusp into essentially three bright stars. Post-refurbishment HST star-count data confirm that the 2.2<sup>′′</sup> core radius observed by Lauer et al. (1991), and questioned by Yanny et al. (1994), is observed neither by Guhathakurta et al. (1996) with WFPC2 data nor by Sosin & King (1996) with FOC data. This surface-density profile clearly continues to climb steadily within 2<sup>′′</sup>. A maximum-likelihood method rules out a 2<sup>′′</sup> core at the 95% confidence level. It is not possible to distinguish at present between a pure power-law profile and a very small core (Sosin & King 1996). Consequently, among the galactic globular clusters, M15 displays one of the best cases of clusters caught in a state of deep core collapse.

## 10 Tidal tails from wide-field imaging

### 10.1 Tidal truncation

In addition to the effects of their internal dynamical evolution, globular clusters suffer strong dynamical evolution from the potential well of their host galaxy (Gnedin & Ostriker 1997, Murali & Weinberg 1997). These external forces speed up the internal dynamical evolution of these stellar systems, accelerating their destruction.
Shocks are caused by the tidal field of the galaxy: interactions with the disk, the bulge and, somehow, with the giant molecular clouds, heat up the outer regions of each star cluster. The stars in the halo are stripped by the tidal field. All globular clusters are expected to have already lost an important fraction of their mass, deposited in the form of individual stars in the halo of the Galaxy (see Meylan & Heggie 1997 for a review). Recent N-body simulations of globular clusters embedded in a realistic galactic potential (Oh & Lin 1992; Johnston et al. 1999) were performed in order to study the amount of mass loss for different kinds of orbits and different kinds of clusters, along with the dynamics and the mass segregation in tidal tails. Grillmair et al. (1995), in an observational analysis of star counts in the outer parts of a few galactic globular clusters, found extra-cluster overdensities that they associated partly with stars stripped into the Galaxy field.

### 10.2 Tidal tails from wide-field observations

Leon, Meylan & Combes (2000) studied the 2-D structures of the tidal tails associated with 20 galactic globular clusters, obtained by using the wavelet transform to detect weak structures at large scale and filter the strong background noise for the low galactic latitude clusters. They also present N-body simulations of globular clusters in orbits around the Galaxy, in order to study quantitatively and geometrically the tidal effects they encounter (Combes, Leon & Meylan 2000). Their sample clusters share different properties or locations in the Galaxy, with various masses and structural parameters. It is of course necessary to have very wide field imaging observations. Consequently, they obtained, during the years 1996 and 1997, photographic films with the ESO Schmidt telescope. The field of view is 5.5° $`\times `$ 5.5° with a scale of 67.5<sup>′′</sup>/mm. The filters used, viz. BG12 and RG630, correspond to $`B`$ and $`R`$, respectively. All these photographic films were digitalized using the MAMA scanning machine of the Observatoire de Paris, which provides a pixel size of 10 $`\mu m`$. The astrometric performances of the machine are described in Berger et al. (1991). The next step — identification of all point sources in these frames — was performed using SExtractor (Bertin & Arnouts 1996), a software package dedicated to the automatic analysis of astronomical images using a multi-threshold algorithm allowing good object deblending. The detection of the stars was done at a 3-$`\sigma `$ level above the background. This software, which can deal with huge amounts of data (up to 60,000 $`\times `$ 60,000 pixels), is not suited for very crowded fields like the centers of the globular clusters, which were simply ignored. A star/galaxy separation was performed by using the method of star/galaxy magnitude vs. log(star/galaxy area). For each field, a ($`B`$ vs. $`B-V`$) color-magnitude diagram was constructed, on which a field/cluster star selection was performed, following the method of Grillmair et al. (1995), since cluster stars and field stars exhibit different colors. In this way present and past cluster members could be distinguished from the fore- and background field stars by identifying in the CMD the area occupied primarily by cluster stars. The envelope of this area is empirically chosen so as to optimize the ratio of cluster stars to field stars in the relatively sparsely populated outer regions of each cluster.
### 10.3 Wavelet Analysis With the assumption that the data can be viewed as a sum of details with different typical scale lengths, the next step consists of disentangling these details using the space-scale analysis provided by the Wavelet Transform (WT, cf. Slezak et al. 1994; Resnikoff & Wells 1998). Any observational signal also includes some noise, which has a short scale length; consequently the noise is higher for the small-scale wavelet coefficients. Monte-Carlo simulations were performed to estimate the noise at each scale and to apply a 3-$`\sigma `$ threshold on the wavelet coefficients, keeping only the reliable structures. In this way it is possible to subtract the short-wavelength noise without removing the longer-wavelength details of the signal. The overdensities of cluster-like stars remaining after the application of the wavelet transform analysis to the star counts are associated with the stars evaporated from the clusters because of dynamical relaxation and/or tidal stripping by the galactic gravitational field. It is worth emphasizing that in this study the following strong observational biases were taken into account: (i) bias due to the clustering of galactic field stars; (ii) bias due to the clustering of background galaxies; (iii) bias due to the fluctuations of the dust extinction, as observed in the IRAS 100-$`\mu m`$ map. ### 10.4 Observational Results The most massive galactic globular cluster, $`\omega `$ Centauri (Meylan et al. 1995), currently crossing the disk plane, is a nearby globular cluster located at a distance of 5.0 kpc from the sun. Its relative proximity allows the star-count selection to reach the main sequence significantly below the turn-off. Estimates, taking into account the possible presence of mass segregation in its outer parts, show that about 0.6 to 1 % of its mass has been lost during the current disk-shocking event. Although this cluster has, in this study, one of the best tail/background S/N ratios, it is far from the only one exhibiting tidal tails. Considering all 20 clusters of the sample, the following conclusions are reached (see Leon, Meylan & Combes 2000 for a complete description of this work): * All the observed clusters that do not suffer from strong observational biases present tidal tails, tracing their dynamical evolution in the Galaxy (evaporation, tidal shocking, tidal torquing, and bulge shocking). * The clusters in the following sub-sample (viz. NGC 104, NGC 288, NGC 2298, NGC 5139, NGC 5904, NGC 6535, and NGC 6809) exhibit tidal extensions resulting from a recent shock, i.e. tails aligned with the tidal field gradient. * The clusters in another sub-sample (viz. NGC 1261, NGC 1851, NGC 1904, NGC 5694, NGC 5824, NGC 6205, NGC 7492, Pal 5, and Pal 12) present extensions that trace only the orbital path of the cluster, with various degrees of mass loss. * NGC 7492 is a striking case because of its very small extension and its high galaxy-driven destruction rate as computed by Gnedin & Ostriker (1997). Its dynamical twin for such an evolution, namely Pal 12, exhibits, on the contrary, a large extension tracing its orbital path, with a possible shock which happened more than 350 Myr ago. * The presence of a break in the outer surface density profile is a reliable indicator of recent gravitational shocks. 
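Referring back to the filtering step of §10.3, here is a schematic à-trous-style decomposition with Monte-Carlo 3-$`\sigma `$ thresholding. It is a sketch under stated assumptions, not the pipeline actually used: Gaussian smoothing stands in for the usual B-spline kernel, the number of scales is arbitrary, and the Poisson noise scaling is deliberately crude.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_planes(img, n_scales=4):
    """Smooth at dyadic scales; plane j is the detail lost between scales."""
    planes, c = [], img.astype(float)
    for j in range(n_scales):
        s = gaussian_filter(c, sigma=2.0 ** j)
        planes.append(c - s)
        c = s
    return planes, c   # detail planes + final smooth residual

def noise_sigma_per_scale(shape, n_scales=4, n_mc=20, rng=None):
    """Monte-Carlo estimate of the coefficient noise at each scale."""
    rng = rng or np.random.default_rng(0)
    sig = np.zeros(n_scales)
    for _ in range(n_mc):
        planes, _ = wavelet_planes(rng.standard_normal(shape), n_scales)
        sig += [p.std() for p in planes]
    return sig / n_mc

def filter_counts(counts, n_scales=4):
    """Keep only coefficients above 3 sigma at each scale, then co-add."""
    planes, residual = wavelet_planes(counts, n_scales)
    # Crude Poisson scaling: unit-variance noise planes rescaled by sqrt(mean).
    sig = noise_sigma_per_scale(counts.shape, n_scales) * np.sqrt(max(counts.mean(), 1.0))
    kept = [np.where(np.abs(p) > 3.0 * s, p, 0.0) for p, s in zip(planes, sig)]
    return sum(kept) + residual

# Example on a fake star-count map with one weak large-scale overdensity:
rng = np.random.default_rng(2)
base = 10.0 + 2.0 * np.exp(-((np.indices((128, 128)) - 64) ** 2).sum(0) / (2 * 20.0 ** 2))
clean = filter_counts(rng.poisson(base).astype(float))
```

The design choice mirrors the text: noise is estimated scale by scale from pure-noise realizations, so small-scale planes get larger thresholds and the large-scale overdensities survive.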
Recent CCD observations with the Wide Field Imager at the ESO/MPI 2.2-m telescope and with the CFH12K camera at the Canada-France-Hawaii 3.6-m telescope will soon provide improved results, because of the more accurate CCD photometry. These observations will allow more precise observational estimates of the mass loss rates for the different regimes of galaxy-driven cluster evolution. ### 10.5 Numerical Simulations Extensive numerical N-body simulations of globular clusters in orbit around the Galaxy were performed in order to study quantitatively and geometrically the tidal effects they encounter and to try to reproduce the above observations. The N-body code used is an FFT algorithm, using the method of James (1977) to avoid the periodic images. With N = 150,000 particles, it required 2.7 seconds of CPU per time step on a Cray-C94. The globular clusters are represented by multi-mass King-Michie models, including mass segregation in the initial conditions. The Galaxy is modeled as realistically as possible, with three components, bulge, disk and dark halo: the bulge is a spherical Plummer law, the disk is a Miyamoto-Nagai model, and the dark matter halo is added to obtain a flat galactic rotation curve. The main conclusions of these simulations can be summarized as follows (see Combes, Leon & Meylan 2000 for a complete description of this work): * All runs show that the clusters are always surrounded by tidal tails and debris. This is also true for those that suffered only a very slight mass loss. These unbound particles distribute in volume density like a power law as a function of radius, with a slope of about –4. This slope is much steeper than in the observations, where the background-foreground contamination dominates at very large scale. * These tails are preferentially composed of low mass stars, since they come from the outer radii of the cluster, where the mass segregation built up by two-body relaxation preferentially gathers the low mass stars. * For a sufficiently high and rapid mass loss, the cluster takes a prolate shape, whose major axis precesses around the z-axis. * When the tidal tail is very long (high mass loss) it follows the cluster orbit: the observation of the tail geometry is thus a way to deduce cluster orbits. Stars are not distributed homogeneously through the tails, but form clumps, and the densest of them, located symmetrically in the tails, are the tracers of the strongest gravitational shocks. Finally, these N-body experiments help to understand the recent observations of extended tidal tails around globular clusters (Grillmair et al. 1995, Leon et al. 2000): systematic observations of the geometry of these tails should provide much information on the orbit, dynamics, and mass loss history of the clusters, and on the galactic structure as well. ## 11 G1 in M31: globular cluster or dwarf galaxy? The globular cluster Mayall II $`\equiv `$ G1, recently observed with the Hubble Space Telescope (HST) camera WFPC2 (Rich et al. 1996, Jablonka et al. 1999, 2000, Meylan et al. 2000), is a bright star cluster which belongs to our companion galaxy, Andromeda $`\equiv `$ M31. Its integrated visual magnitude $`V`$ = 13.75 mag corresponds to an absolute visual magnitude $`M_V`$ = –10.86 mag, with $`E(B-V)`$ = 0.06 and a distance modulus $`(m-M)_{M31}`$ = 24.43 mag, implying a total luminosity of about $`L_V`$ $`\sim `$ 2 $`\times `$ $`10^6L_\odot `$. The coordinates of G1, viz. 
$`\alpha _{G1}`$(J2000.0) = 00<sup>h</sup> 32<sup>m</sup> 46<sup>s</sup>.878 and $`\delta _{G1}`$(J2000.0) = +39° 34<sup>′</sup> 41<sup>′′</sup>.65, when compared to the coordinates of the center of M31, viz. $`\alpha _{M31}`$(J2000.0) = 00<sup>h</sup> 42<sup>m</sup> 44<sup>s</sup>.541 and $`\delta _{M31}`$(J2000.0) = +41° 16<sup>′</sup> 28<sup>′′</sup>.77, place it at a projected distance of about 3°, i.e. 39.5 kpc from the center of M31. In spite of this rather large projected distance, both the color-magnitude diagrams and the radial velocities of G1 and M31, viz. $`V_r`$(G1) = – 331 $`\pm `$ 24 km s<sup>-1</sup> while $`V_r`$(M31) = – 300 $`\pm `$ 4 km s<sup>-1</sup> (21-cm HI line) and $`V_r`$(M31) = – 295 $`\pm `$ 7 km s<sup>-1</sup> (optical lines), fully support the idea that this cluster belongs to the globular cluster system of M31. Our ($`V`$ vs. $`V-I`$) color-magnitude diagram reaches stars with magnitudes fainter than $`V`$ = 27 mag, with a well populated red horizontal branch at about $`V`$ = 25.25 mag; we confirm the existence of a blueward extension of the red horizontal branch clump as already observed by Rich et al. (1996). From model fitting, we determine a rather high mean metallicity of \[Fe/H\] = –0.95 $`\pm `$ 0.09, intermediate between the previous determinations of \[Fe/H\] = –0.7 (Rich et al. 1996) and \[Fe/H\] = –1.2 (Bonoli 1987; Brodie & Huchra 1990). From artificial-star experiments designed to estimate our true measurement errors, we observe a clear spread in our photometry that we attribute to an intrinsic metallicity dispersion among the stars of G1. Namely, adopting $`E(V-I)`$ = 0.10 implies a 1-$`\sigma `$ \[Fe/H\] dispersion of $`\pm `$ 0.50 dex; adopting $`E(V-I)`$ = 0.05 implies a 1-$`\sigma `$ \[Fe/H\] dispersion of $`\pm `$ 0.39 dex. In all cases, the intrinsic metallicity dispersion is significant and may be the consequence of self-enrichment during the early stellar/dynamical evolution phases of this cluster. We have at our disposal two essential observational constraints allowing the mass determination of Mayall II $`\equiv `$ G1: (i) First, its surface brightness profile from HST/WFPC2 images, providing the essential structural parameters: the core radius $`r_c`$ = 0.14<sup>′′</sup> = 0.52 pc, the half-mass radius $`r_h`$ = 3.7<sup>′′</sup> = 14 pc, the tidal radius $`r_t`$ $`\sim `$ 54<sup>′′</sup> = 200 pc, implying a concentration $`c`$ = log ($`r_t/r_c`$) $`\sim `$ 2.5 (Meylan et al. 2000). (ii) Second, its central velocity dispersion from KECK/HIRES spectra, providing an observed velocity dispersion $`\sigma _{obs}`$ = 25.1 km s<sup>-1</sup> and an aperture-corrected central velocity dispersion $`\sigma _0`$ = 27.8 km s<sup>-1</sup>. ### 11.1 King model and Virial mass estimates We can first obtain simple mass estimates from King models and from the Virial theorem (see, e.g., Illingworth 1976). The first estimate, the King mass, is given by the simple equation: $$\mathrm{King}\mathrm{mass}=\rho _cr_c^3\mu =167r_c\mu \sigma _0^2$$ $`(8)`$ where the core radius $`r_c`$ = 0.52 pc, the dimensionless quantity $`\mu `$ = 220 for $`c`$ = log ($`r_t/r_c`$) = 2.5 (King 1966), and the central velocity dispersion $`\sigma _0`$ = 27.8 km s<sup>-1</sup>. These values determine a total mass for the cluster of $`M_{tot}`$ = 15 $`\times `$ $`10^6M_\odot `$ with the corresponding $`M/L_V`$ $`\sim `$ 7.5. The second estimate, the Virial mass, is given by the simple equation: $$\mathrm{Virial}\mathrm{mass}=670r_h\sigma _0^2$$ $`(9)`$ where the half-mass radius $`r_h`$ = 14 pc and the central velocity dispersion $`\sigma _0`$ = 27.8 km s<sup>-1</sup>. 
These values determine a total mass for the cluster of $`M_{tot}`$ = 7.3 $`\times `$ $`10^6M_\odot `$ with the corresponding $`M/L_V`$ $`\sim `$ 3.6. ### 11.2 King-Michie model mass estimate The existing observational constraints allow the use of a multi-mass King-Michie model as defined by Eq. 6 above. See §4.1 above and Meylan et al. (1995) in the case of such a model applied to $`\omega `$ Centauri. In the case of G1, such a model is simultaneously fitted to the surface brightness profile from HST/WFPC2 and to the central velocity dispersion value from KECK/HIRES. An extensive grid of about 150,000 models was computed in order to explore the parameter space defined by the Initial Mass Function (IMF) exponent $`x`$, where $`x`$ would equal 1.35 in the case of Salpeter (1955), the central gravitational potential $`W_0`$, and the anisotropy radius $`r_a`$. The IMF exponent actually consists of three parameters: $`x_{hr}`$, describing the heavy remnants resulting from the already evolved stars with initial masses in the range between 0.85 and 100 $`M_\odot `$; $`x_{MS}^{up}`$, describing the stars still on the Main Sequence, with initial masses in the range between 0.25 and 0.85 $`M_\odot `$; and $`x_{MS}^{down}`$, describing the stars still on the Main Sequence, with initial masses in the range between 0.10 and 0.25 $`M_\odot `$. Table 3 presents eleven of the 50 models with the lowest $`\chi ^2`$, illustrating some of the input and output parameters. Good models are considered as such not only on the basis of the $`\chi ^2`$ of the surface brightness fit (see Fig. 7), but also from their predictions of the observed integrated luminosity of the cluster and of the input mass-to-light ratio of the model. The different columns in Table 3 give, for each model, its IMF exponents $`x_{MS}^{up}`$ and $`x_{MS}^{down}`$; the fraction $`M_{hr}`$ of its total mass in the form of heavy stellar remnants such as neutron stars and white dwarfs; its concentration $`c`$ = log ($`r_t/r_c`$); the total mass $`M_{tot}`$ of the cluster, in solar units; and the corresponding mass-to-light ratio $`M/L_V`$, also in solar units. Since the velocity dispersion profile is reduced to one single value, the central velocity dispersion, the models are not strongly constrained, providing equally good fits for rather different sets of parameters. The IMF exponent $`x_{hr}`$, describing the amount of neutron stars, appears in all models to be very close to $`x`$ = 1.35 (Salpeter 1955). Given the absence of any velocity dispersion profile, the most reliable results are those related to the concentration and the total mass. With a concentration $`c`$ = log ($`r_t/r_c`$) somewhere between 2.45 and 2.65, G1 clearly presents, in all cases, the characteristics of a collapsed cluster. This is completely different from $`\omega `$ Centauri, the most massive but loose galactic globular cluster, which, with a concentration of about 1.3, has a very large core radius of about 5 pc and is consequently very far from core collapse. With a total mass somewhere between 13 and 18 $`\times `$ $`10^6M_\odot `$, and with the corresponding mass-to-light ratio $`M/L_V`$ between 6 and 9, G1 is significantly more massive than $`\omega `$ Centauri, maybe by up to a factor of three. The King-Michie mass estimates are in full agreement with the King mass estimate, while the Virial mass estimate is smaller by about a factor of two. 
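The two simple estimates of §11.1 can be checked with a few lines; the snippet below just re-evaluates Eqs. (8) and (9) with the values quoted in the text (nothing else is assumed):

```python
# Check of Eqs. (8) and (9) with the values quoted in the text.
r_c, r_h = 0.52, 14.0      # pc
mu, sigma0 = 220.0, 27.8   # dimensionless; km/s
L_V = 2.0e6                # solar luminosities

M_king = 167.0 * r_c * mu * sigma0**2      # ~1.5e7 M_sun
M_virial = 670.0 * r_h * sigma0**2         # ~7.3e6 M_sun
print(f"King:   {M_king:.2e} M_sun, M/L_V = {M_king / L_V:.1f}")
print(f"Virial: {M_virial:.2e} M_sun, M/L_V = {M_virial / L_V:.1f}")
```

This reproduces the quoted 15 × 10⁶ and 7.3 × 10⁶ $`M_\odot `$, and the mass-to-light ratios of about 7.5 and 3.6.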
It is worth mentioning that such a mass difference is not peculiar to G1: the same factor of about two is also observed between the King-Michie and Virial mass estimates of other clusters. See, e.g., Meylan & Mayor (1986) and Meylan et al. (1995) in the case of $`\omega `$ Centauri. ### 11.3 Mayall II $`\equiv `$ G1 is a genuine globular cluster From these three different mass determinations (King, Virial, King-Michie), we can reach the following conclusions about Mayall II $`\equiv `$ G1: (i) All mass estimates give a total mass up to three times as large as the total mass of $`\omega `$ Centauri; (ii) With $`c`$ = log ($`r_t/r_c`$) $`\sim `$ 2.5, G1 is more concentrated than 47 Tucanae, a massive galactic globular cluster considered to be on the verge of collapsing; G1 has a surface brightness profile typical of a collapsed cluster; (iii) G1 is the heaviest globular cluster weighed so far. Given these results, one may wonder whether, even more than $`\omega `$ Centauri, G1 could be a kind of transition step between globular clusters and dwarf elliptical galaxies. There is a way of checking this hypothesis. Kormendy (1985) used four quantities, the central surface brightness $`\mu _0`$, the central velocity dispersion $`\sigma _0`$, the core radius $`r_c`$, and the total absolute magnitude M, in order to define various planes from combinations of two of these quantities, e.g., ($`\mu _0`$ vs. log $`r_c`$). In all these planes, the various stellar systems plotted by Kormendy (1985) segregate into three well separated sequences: (i) ellipticals and bulges, (ii) dwarf ellipticals, and (iii) globular clusters. When plotted in any of these planes, G1 always appears on the sequence of globular clusters, and cannot be confused with either the ellipticals and bulges or the dwarf ellipticals. The same is true for $`\omega `$ Centauri. Consequently, Mayall II $`\equiv `$ G1 can be considered a genuine bright and massive globular cluster. Actually, G1 may not be the only such massive globular cluster in M31. This galaxy, which has about twice as many globular clusters as our Galaxy, has at least three other clusters with central velocity dispersions larger than 20 km s<sup>-1</sup> (Djorgovski et al. 1997). Unfortunately, so far, G1 is the only such cluster imaged with the high spatial resolution of the HST/WFPC2 camera, and consequently the only such massive cluster with known structural parameters. G1 and the other three bright M31 globular clusters probably represent the high-mass and high-luminosity tails of the otherwise very normal mass and luminosity distributions of the rich M31 population of globular clusters. ## 12 Conclusion This review summarizes only part of the tremendous developments that have taken place during the last two decades. These recent developments are far from having exploited all the new capabilities offered by the impressive progress in computer simulations, made possible by more powerful single-purpose hardware and software (Hut & Makino 1999, Spurzem 1998). Observations, too, still have a lot of information to provide, which will require more elaborate modeling before full interpretation is reached. The mere observation of globular cluster stellar populations presents some puzzles which are far from being understood (Anderson 1997, 1999). The kinematical and dynamical understanding of globular clusters will need the exploitation of numerous radial velocities and proper motions of individual stars. 
Only small numbers of radial velocities have been painstakingly accumulated over the last two decades, while the proper motions have so far simply been ignored. But there is an enormous amount of untapped information locked in the radial velocities (for one third) and proper motions (for two thirds). Fortunately, a large amount of kinematical data (radial velocities and proper motions for a few thousand stars) will soon permit investigation of the 3-D space velocity distribution and rotation in the two largest galactic globular clusters, viz. $`\omega `$ Centauri and 47 Tucanae (Freeman et al., Meylan et al., both in preparation). ## Acknowledgments It is a pleasure to thank the following collaborators – T. Bridges (AAO), F. Combes (Paris), G. Djorgovski (Caltech), P. Jablonka (Paris), S. Leon (Paris), and A. Sarajedini (Wesleyan) – for allowing me to present some of our results in advance of publication.
# A possible scenario for thermally activated avalanches in type-II superconductors ## 1 Introduction Magnetic flux penetrates type-II superconductors above a certain critical field $`H_{c1}`$ in the form of vortices. The interaction of these vortices with the pinning centers produces a magnetic flux profile inside the superconductor with a slope proportional to the critical current density inside the sample, $`j_c`$, defined as the maximum current density the material supports without dissipation, a situation accounted for by the so-called Bean Critical State Model. This picture, as de Gennes first noted, is very similar to the case of sandpiles, where a constant slope appears in the pile resulting from the competition between gravity and the friction between grains. In 1987, Bak et al proposed a theory – now known as Self-Organized Criticality theory (SOC) – to explain the existence of self-similar structures in Nature. Since then, SOC has been used to interpret the dynamics of avalanches of many sizes in sandpiles, earthquakes, evolution, and other phenomena, see for a general review. The occurrence of self-organized criticality was soon sought in superconductors as well, where field-driven experiments have been designed and many numerical simulations developed, unfortunately without conclusive answers. Superconductors differ from most systems exhibiting SOC in the relevant role played by the temperature. The temperature causes the relaxation of the critical state, leading to a nearly logarithmic magnetization decay, $`m(t)\propto \mathrm{ln}(t)`$. In the early 90’s many researchers tried to relate the role played by the temperature to the existence of avalanches of many sizes in relaxation experiments. However, in 1995 Bonabeau and Lederer approximately solved the diffusion equation for the magnetic field inside a superconducting slab and demonstrated that (within the usually accessible time scales in the experiments) it is impossible to determine the existence of thermally activated avalanches by classical magnetic relaxation measurements, i.e., by the study of the decay of the mean value of the magnetization in the sample. In 1998 Aegerter studied the magnetic relaxation of a Bismuth single crystal but, instead of the usually measured mean value of $`m(t)`$, he paid attention to the fluctuations during the decay of the magnetization and showed evidence of power-law distributed thermally activated avalanches. In this work we develop a simple scenario able to account for the existence of these thermally activated avalanches. This does not mean we claim the existence of SOC during the relaxation of the magnetization. SOC is well defined only for a system in a marginal stationary state, which is not the case for the vortex lattice in the presence of thermal activation. What we are claiming is that, because of the complex interaction between the vortices, the pinning centers and the temperature, avalanches of moving vortices with many sizes may produce the relaxation of the critical state as previously determined in reference. The remainder of the paper is organized as follows. In the next section we describe the Cellular Automaton used in our simulations. In section 3 we present and discuss the numerical results. Then, section 4 is devoted to the study of the scaling relations between our distributions and those usually obtained in field-driven experiments. 
In section 5 we outline some conditions needed for the occurrence of thermally activated avalanches of many sizes, and finally in section 6 the conclusions are given. ## 2 The model While the use of “real” forces between vortices in molecular dynamics simulations resembles the experimental situation better than simple cellular automata, such simulations are far more time consuming, and this is an important drawback of the method, especially when one is looking for critical exponents or when the effects of the temperature are introduced in the system. Recently, to sidestep these problems, Bassler and Paczuski introduced a simple cellular automaton to study the behaviour of the vortex lattice in type-II superconductors. This cellular automaton leaves out part of the relevant physics of the vortex lattice, such as the variation of the pinning strength with the increasing field, the possible mismatch between the vortex lattice and the pinning centers, the elasticity of the vortex lattice, etc. However, it contains the interaction between vortices and with the pinning centers, and the long-range character of the vortex interaction, first through the introduction of the parameter $`r`$ (see below), but also by implicitly assuming that each lattice cell contains more than one vortex. In addition, it is able to predict the self-organization of the lattice into a critical state characterized by power-law distributed avalanches, and the irreversibility of the magnetization. Aiming to describe the influence of the temperature on this critical state, we adopt this model with some modifications. The cellular automaton consists of a $`2D`$ honeycomb lattice, where each site is characterized by the number of vortices on it, $`m(x)`$, and by its pinning strength $`V(x)`$, equal to 0 with probability $`p`$ and to $`q`$ with probability $`1-p`$. The force acting on a vortex at site $`x`$ in the direction of site $`y`$ is calculated as: $$F_{xy}=-V(x)+V(y)+(m(x)-m(y)-1)+r[m(x_1)+m(x_2)-m(y_1)-m(y_2)]$$ (1) where $`x_1`$ and $`x_2`$ are the nearest neighbors of $`x`$ (other than $`y`$), $`y_1`$ and $`y_2`$ are the nearest neighbors of $`y`$ (other than $`x`$), and $`r`$ is a measure of their contribution to the total force on the vortex at $`x`$ $`(0<r<1)`$. A vortex at site $`x`$ moves to its neighbor site $`y`$ if the force acting on it in that direction is greater than zero. If the force in more than one direction is greater than zero, then one of them is chosen at random. To introduce the effect of the temperature we assumed that vortices for which the forces are lower than zero still have a probability of motion given by: $$P_{xy}\propto \mathrm{exp}(-U(j)/kT)$$ (2) where $`k`$ is the Boltzmann constant and $`T`$ is the temperature. The current, $`j`$, was calculated locally using the gradient of $`m(x)`$, and $`U(j)`$ represents different pinning barriers proposed in the literature: $`U(j)=U_oj_c/j`$, $`U(j)=U_o\mathrm{ln}(j_c/j)`$ and $`U(j)=U_o(1-j/j_c)`$. An avalanche starts by randomly choosing a lattice site and evaluating (2). If this probability is smaller than a random number the procedure is repeated; otherwise the vortex moves, perturbing its neighbours. Then, the direction of motion of the new unstable vortices is calculated using (1). At this point, all the sites are updated in parallel until no unstable sites persist. The avalanche size is defined as the number of topplings corresponding to the thermal activation of one vortex, while the avalanche duration is defined as the number of updatings necessary to complete one avalanche. 
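To make the update rules concrete, here is a minimal Python sketch of Eqs. (1) and (2). It is a schematic illustration under stated assumptions, not the original implementation: the honeycomb lattice is replaced by a square lattice with fully periodic boundaries, only two of the three remaining neighbors enter the $`r`$ term, the Anderson-Kim barrier is used with the local slope of $`m`$ as a proxy for $`j/j_c`$, the sign convention of Eq. (1) follows the reconstruction above, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper).
L, r, q, p, U0_kT = 32, 0.1, 1.0, 0.7, 10.0
m = rng.integers(0, 4, size=(L, L))                # vortices per cell
V = np.where(rng.random((L, L)) < p, 0.0, q)       # pinning strength: 0 or q

def nbrs(x):
    i, j = x
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def force(x, y):
    # Eq. (1): force on a vortex at x toward neighbor y.
    nx = [s for s in nbrs(x) if s != y][:2]
    ny = [s for s in nbrs(y) if s != x][:2]
    return (-V[x] + V[y] + (m[x] - m[y] - 1)
            + r * (m[nx[0]] + m[nx[1]] - m[ny[0]] - m[ny[1]]))

def relax(seed, cap=5000):
    # Propagate deterministic topplings (force > 0); return number of topplings.
    active, size = [seed], 0
    while active and size < cap:
        x = active.pop()
        if m[x] == 0:
            continue
        pushes = [y for y in nbrs(x) if force(x, y) > 0]
        if pushes:
            y = pushes[rng.integers(len(pushes))]
            m[x] -= 1; m[y] += 1; size += 1
            active.extend([x, y] + nbrs(x) + nbrs(y))
    return size

def thermal_attempt(jc=1.0):
    # Eq. (2) with U(j) = U0 (1 - j/jc); the local slope of m stands in for j.
    x = (rng.integers(L), rng.integers(L))
    if m[x] == 0:
        return 0
    j = min(max(max(m[x] - m[y] for y in nbrs(x)), 0.0), jc)
    if rng.random() < np.exp(-U0_kT * (1.0 - j / jc)):
        y = nbrs(x)[rng.integers(4)]
        m[x] -= 1; m[y] += 1
        return 1 + relax(x) + relax(y)
    return 0

sizes = [s for _ in range(10000) if (s := thermal_attempt()) > 0]
print(len(sizes), "avalanches; largest:", max(sizes, default=0))
```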
In all cases the procedure was repeated for $`10^4`$ m.c.s., where one m.c.s. was defined by $`L^2`$ evaluations of (2), and lattices up to $`L=200`$ were used. The initial configuration was obtained by slowly adding vortices to the system (at $`T=0`$) until a critical slope was reached. The boundaries “parallel to the net vortex motion” were assumed periodic, while the other two were fixed to mimic the applied external field. The magnetization, $`M`$, was calculated as the mean magnetic field inside the sample minus the external applied field, i.e., $$M=\underset{i=0}{\overset{i=L}{\sum }}B(i)-H$$ (3) where $`H`$ is the field, i.e. the number of vortices, at the borders of the lattice. ## 3 Numerical results Figure 1 shows typical relaxation curves obtained for systems of different sizes using a vortex-glass-like potential $`U(j)\propto j_c/j`$ and the algorithm described above. In Figure 2 the relaxation curve is represented for a system with $`T\rightarrow \mathrm{\infty }`$, meaning that we disregard the avalanche-like behavior explained previously, by avoiding the calculation of equation (1) after a thermal jump. In both figures (see also the insets) three regimes are present: a plateau, then a logarithmic relaxation, and finally another plateau due to finite size effects. Only the time scales of these regimes are different, but this is irrelevant from the experimental point of view. So, as already noted before, our results suggest that it is not possible to decide about the existence or not of thermally activated vortex avalanches from “simple thermodynamic magnetic” relaxation measurements. Other pinning potentials as well as different $`U_o/kT`$ ratios were used, and no fundamental differences with the previous results were obtained. Figures 3 and 4 represent the integrated distribution of avalanche sizes, $`D_{int}(s)`$, and the integrated distribution of avalanche times, $`D_{int}(t)`$ (the meaning of the name “integrated” will be clarified below), obtained using a classical Anderson-Kim potential, $`U(j)=U_o(1-j/j_c)`$, for a system with $`L=200`$ and $`U_o/kT=10`$; as before, other pinning potentials were also used, resulting in a similar behavior. These distributions were obtained using the avalanche sizes and times (defined in section 2) recorded during the whole relaxation process. As figures 3 and 4 clearly show, avalanches of many sizes emerge. This does not mean the system is critical; instead, it is relaxing from a critical state to its corresponding thermodynamic equilibrium. What we are showing is that this relaxation can proceed by means of avalanches of many sizes, in accordance with recent experimental results. However, somewhat more surprisingly, we will show in the next section that the exponents characterizing these distributions are related through simple scaling relations to the exponents derived in the context of SOC for systems in a critical state. Considering that $`D_{int}(s)`$ follows a power law: $$D_{int}(s)\propto s^{-\tau _n}$$ (4) the exponent estimated from figure 3 was $`\tau _n=2.70\pm 0.1`$ (different from the $`\tau =1.63`$ obtained in references and for a field-driven experiment) and, assuming $`D_{int}(t)\propto t^{-\tau _{tn}}`$ for the integrated distribution of avalanche times, we got from figure 4 $`\tau _{tn}=4.0\pm 0.2`$. It is worth mentioning here that the exponent $`\tau _n`$ was also experimentally determined in reference and reported as $`\tau _n=2.0`$, lower than our value. 
This divergence can be explained since figures 3 and 4 represent the distribution of avalanches obtained over the whole relaxation process, i.e. starting at the critical state and finishing at equilibrium, a situation impossible to realize in real experimental situations. Determining the distribution of avalanche sizes using just part of the relaxation curve gives different numerical estimates for $`\tau _n`$ and $`\tau _{tn}`$, as is evident from figure 5. In fact, figure 5 represents five avalanche size distributions, $`P(s)`$, obtained for different time intervals of the relaxation curve, from the upper to the lower curve $`t`$ = 1–10, 11–100, 101–1000, 1001–10000 and 10001–100000 m.c.s., whose superposition corresponds to the full relaxation of the system (see figure 1). The straight line represents the integrated distribution of avalanche sizes, $`D_{int}(s)`$, obtained in figure 3, $`\tau _n=2.7`$. Then, from the figure we can conclude that different exponents can be predicted depending on the range of times measured. For low enough times, the exponent is lower than that associated with $`D_{int}`$, while for large times a peaked distribution is obtained, with only very small avalanches. Another source of discrepancies between our numerical estimates and experimental situations comes from the change of relaxation regimes. In fact, there is no a priori justification to assume that avalanches of many sizes will dominate the relaxation process within the whole range of $`j`$ and $`T`$, a situation that was deeply analyzed in references and is discussed in a different context in section 5. ## 4 Scaling relations Rather than introducing directly the derivation of our scaling relations, we prefer to start with a short review of some important scaling concepts of the theory of self-organized criticality. Following the first ideas of Bak and collaborators, systems that behave as predicted by SOC show distributions of avalanche sizes and times that follow power laws, i.e., $$P(s)\propto s^{-\tau }$$ (5) and $$P(t)\propto t^{-\tau _t}$$ (6) respectively. For systems not exactly in the critical state, these expressions transform into: $$P(s)\propto s^{-\tau }f(s/s_c)$$ (7) and $$P(t)\propto t^{-\tau _t}f(t/t_c)$$ (8) where $`s_c`$ and $`t_c`$ reflect the departure of the system from criticality, $`s_c\propto (j_c-j)^{-1/\sigma _1}`$ and $`t_c\propto (j_c-j)^{-1/\sigma _2}`$, $`\sigma _1`$ and $`\sigma _2`$ being new critical exponents, and where the function $`f(x)`$ has the following properties: $`f(x)\rightarrow \mathrm{const}`$ if $`x\rightarrow 0`$ and $`f(x)\rightarrow 0`$ if $`x\rightarrow \mathrm{\infty }`$, in order to recover the “critical picture” when $`j\rightarrow j_c`$. In finite size systems $`s_c`$ and $`t_c`$ also reflect the effect of the sample dimensions through two new critical exponents $`D`$ and $`z`$. In fact, in analogy with the theory of critical phenomena, $`s_c\propto L^D`$ and $`t_c\propto L^z`$. Furthermore, the coherence length $`\xi `$ diverges at the critical state as: $$\xi \propto (j_c-j)^{-\nu }$$ (9) From the definitions of $`s_c`$ and $`t_c`$ and equation (9) it is straightforward to show that $`s_c\propto \xi ^{1/\nu \sigma _1}`$ and $`t_c\propto \xi ^{1/\nu \sigma _2}`$. Moreover, since for a finite size system at the critical state $`\xi =L`$, our first scaling relation takes the form: $$\frac{D}{z}=\frac{\sigma _1}{\sigma _2}$$ (10) As already discussed above, the integrated distributions of avalanche sizes and times calculated in section 2, $`D_{int}(s)`$ and $`D_{int}(t)`$, result, since the system is relaxing, from avalanches obtained for values of the current density ranging from $`j_c`$ to $`j`$. 
These distributions are different from those obtained in typical field-driven experiments or simulations, since the latter are obtained “in principle” for a fixed value of the current density, $`j_c`$, which indeed determines the criticality of the system. Then, it is natural to assume that $`D_{int}(s)`$ and $`D_{int}(t)`$ are related to the distributions obtained just at the critical state, $`D(s)`$ and $`D(t)`$, by the following formulae: $$D_{int}(s)\propto \int _{j_c}^0s^{-\tau }f(s/s_c)\,dj$$ (11) and $$D_{int}(t)\propto \int _{j_c}^0t^{-\tau _t}f(t/t_c)\,dj$$ (12) which immediately explains the meaning of the label “integrated” used for these distributions. Then, substituting the definitions of $`s_c`$ and $`t_c`$ in equations (11) and (12) and performing the simple change of variables $`x=s(j_c-j)^{1/\sigma _1}`$ (and, analogously, $`x=t(j_c-j)^{1/\sigma _2}`$ for the times), we obtain the following expressions for the integrated distributions of avalanche sizes and times: $$D_{int}(s)=s^{-(\tau +\sigma _1)}\int _0^{s(j_c)^{1/\sigma _1}}\sigma _1x^{\sigma _1-1}f(x)\,dx$$ (13) $$D_{int}(t)=t^{-(\tau _t+\sigma _2)}\int _0^{t(j_c)^{1/\sigma _2}}\sigma _2x^{\sigma _2-1}f(x)\,dx$$ (14) which prove that for $`s`$ (respectively $`t`$) large enough both integrals are constants, so that there is no cut-off length in the integrated distributions, a result already obtained in our simulations (see figures 3 and 4). Also, from equations (13) and (14) and the definitions of $`D_{int}(s)`$ and $`D_{int}(t)`$, we immediately obtain the following scaling relations $$\tau _n=\tau +\sigma _1$$ (15) $$\tau _{tn}=\tau _t+\sigma _2$$ (16) which, in combination with (10), lead to: $$\frac{D}{z}=\frac{\tau _n-\tau }{\tau _{tn}-\tau _t}$$ (17) In this way expression (17) establishes a connection between the exponents obtained in field-driven experiments or simulations, $`\tau ,\tau _t,D,z`$, and those from thermally activated avalanches, $`\tau _n,\tau _{tn}`$. In fact, the results obtained in our simulations and those obtained in references satisfy the previous relation. However, some points deserve further discussion. The power-law divergences of $`s_c`$, $`t_c`$ and $`\xi `$ are strictly valid close to the critical state, $`j_c`$. Far away from this state these divergences no longer hold exactly; however, considering the good results obtained in the check of our calculations and of the scaling law (17), we believe that this last assumption is not relevant for the solution of the model. Also, the scaling law (17) was obtained assuming the complete relaxation of the system, so it is difficult to test in real experiments. ## 5 Applicability Our previous picture assumes that a thermally activated vortex jump affects its neighborhood, generating an instability that leads to a cascade of vortex jumps related to the vortex distribution in the sample. However, it is well known that there exists a characteristic time for thermally activated phenomena, $`t_{th}=t_o\mathrm{exp}(U(j)/kT)`$, representing the time a vortex spends at a pinning site before jumping due to thermal activation. This means that our model will be valid if these avalanches occur within times lower than $`t_{th}`$, i.e. the avalanches should develop fast enough to be mutually independent. This resembles the idea developed by Vespignani et al in the context of sandpile and forest-fire models. They showed through simulations and mean-field considerations that one necessary condition for the occurrence of SOC, at least in these models, is the separation of time scales between the external excitation and the response of the system. 
Then, as mentioned above, the maximum time an avalanche persists is $`t_c=t_{co}(1-j/j_c)^{-1/\sigma _2}`$, where $`t_{co}`$ is the time a vortex spends moving from one site to another, which of course depends on the local current and flux density in the system. Considering that the vortices are separated by a distance $`a`$, the time they spend traveling this distance is $$t_{co}=\frac{a}{v}$$ (18) where $`v`$ depends on the Lorentz force acting on the vortex, $`v=j\mathrm{\Phi }_o/\eta `$, and $`a=(\mathrm{\Phi }_o/B)^{1/2}`$, which immediately gives the following dependence of $`t_{co}`$ on $`j`$ and $`B`$: $$t_{co}=\frac{\eta }{j\sqrt{\mathrm{\Phi }_oB}}$$ (19) In the critical state $`B`$ varies along the sample. This variation is, even in the presence of thermal activation, very well accounted for by the Bean model. This means that for a fully penetrated sample $$B(x)=\mu _oH-\mu _ojx$$ (20) where $`H`$ is the external field. Then, substituting equations (19) and (20) in the definition of $`t_c`$, we obtain $$t_c=\frac{\eta (1-j/j_c)^{-1/\sigma _2}}{j\mathrm{\Phi }_o^{1/2}\sqrt{\mu _oH-\mu _ojx}}$$ (21) Now, to determine the regime of applicability of our model, we must verify under which conditions the inequality $`t_c\ll t_{th}`$ holds. From the experimental point of view, the relevant avalanches to be detected by measuring the fluctuations in the magnetization decay are those starting at the border of the sample, since those are the ones that produce changes in $`m`$. Moreover, the avalanches starting at the border of the sample are also those with the longest durations, since they have a larger area for spreading (remember that the critical state in a type-II superconductor is symmetric with respect to the center of the sample). Then, we may set $`x=0`$ in (21) and obtain the following inequality: $$\frac{\eta (1-j/j_c)^{-1/\sigma _2}}{j\mathrm{\Phi }_o^{1/2}\sqrt{\mu _oH}}\ll t_o\mathrm{exp}(U(j)/kT)$$ (22) which can be written as: $$\frac{j^{*}}{j}\frac{1}{(1-j/j_c)^{1/\sigma _2}}\ll \mathrm{exp}(U(j)/kT)$$ (23) where $`j^{*}=\eta /(t_o\mathrm{\Phi }_o^{1/2}\sqrt{\mu _oH})`$. Assuming, for example, $`U(j)=U_o\mathrm{ln}(j_c/j)`$, the previous inequality takes the form: $$\frac{j^{*}j^{\alpha -1}}{j_c^\alpha }(1-j/j_c)^{-1/\sigma _2}\ll 1$$ (24) with $`\alpha =U_o/kT`$. It is then straightforward to demonstrate that equation (24) holds under the following conditions: if $`\alpha \gg 1`$, $`j`$ must be much lower than $`j_c`$ ($`j\ll j_c`$); in the opposite case, $`j^{*}\ll j\ll j_c`$. Similar expressions can be derived for the Anderson-Kim potentials and for potentials derived from the Collective Pinning Theory. These conditions are the consequence of the competition between the increase of $`t_{th}`$ when $`j\rightarrow 0`$ and the divergence of $`t_c`$ when $`j\rightarrow 0`$ and $`j\rightarrow j_c`$ (see equation (21)), and they can be interpreted in the following way. Close to $`j_c`$ the avalanche durations are very long because the avalanche sizes become huge, so one always needs to be far from $`j_c`$, a situation often encountered in high temperature superconductors, to ensure that $`t_{av}\ll t_{th}`$. Particularly for $`U_o\ll kT`$, when thermally activated jumps become frequent ($`t_{th}`$ small), high enough currents ($`j\gg j^{*}`$) are also necessary to ensure rapid vortex motion during the avalanche, and hence short avalanche durations. From the experimental point of view these conditions should be viewed with some caution. 
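To make the window defined by Eq. (24) concrete, the short sketch below scans its left-hand side over $`j/j_c`$. All parameter values are assumed for illustration only (none are taken from the paper), and the strict inequality is read, arbitrarily, as "below 10⁻²":

```python
import numpy as np

# Assumed parameters: alpha = U0/kT, jstar in units of jc, cut-off exponent sigma_2.
alpha, jstar_over_jc, sigma2 = 20.0, 1e-3, 0.5

jj = np.linspace(1e-4, 0.999, 2000)          # j / jc
lhs = jstar_over_jc * jj**(alpha - 1.0) * (1.0 - jj)**(-1.0 / sigma2)

ok = jj[lhs < 1e-2]                          # "<<" read as two orders of magnitude
if ok.size:
    print(f"Eq. (24) holds for {ok.min():.3g} <= j/jc <= {ok.max():.3g}")
```

As expected from the discussion above, for large $`\alpha `$ the inequality fails only close to $`j_c`$, where the $`(1-j/j_c)^{-1/\sigma _2}`$ factor diverges.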
For example, since $`j_c`$ decays with temperature, for $`U_o\ll kT`$ the range of current densities where thermally activated avalanches could appear is still narrower than that suggested by a simple inspection of the formula $`j^{*}\ll j\ll j_c`$; we therefore strongly recommend looking for these avalanches at low temperatures and in very disordered systems, where $`U_o\gg kT`$. In the light of these results it is useful to come back to the experiment of Aegerter. He found a single critical exponent characterizing the avalanche size distribution during the relaxation of the magnetization, and found that this exponent was independent of the temperature of the system. Neither of these results contradicts our model. Even though his critical exponent was 2.0 and ours $`\tau _n=2.7`$, figure 5 indicates that smaller exponents are associated with smaller relaxation times in our model. This suggests that, if the time window in the experiment of reference had been shifted to larger times, an exponent closer to ours would have been observed. This does not mean, of course, that such a shift can be trivially performed in practice. In addition, he found an initial regime during the relaxation where the avalanches are not power-law distributed. While he explained this as a transient period the system takes to reach SOC, our results suggest a different explanation. During this period the system is still too close to the critical state, $`j\sim j_c`$, and power-law avalanches have not yet developed, since the thermally activated avalanches overlap each other. This explanation is consistent with the long time associated with this transient period, and with the dependence of this time on temperature. Experimentally, Aegerter found that larger times are associated with larger temperatures; indeed, in our model larger temperatures imply the necessity of lower values of the ratio $`j/j_c`$ to find power-law distributed avalanches, and this means longer transient periods. ## 6 Conclusions In conclusion, we have developed a simple scenario to explain the recently reported power-law distributed thermally activated avalanches in type-II superconductors. We proved that the exponents associated with these distributions depend on the time interval of the measurement. We also proved that the exponents characterizing a distribution of thermally activated avalanches obtained during the whole relaxation experiment (i.e., from the critical to the equilibrium state) are related to those obtained in field-driven experiments by scaling relations, a conclusion also supported by our simulations. The conditions for the appearance of these avalanches were discussed, and it was also shown that, in a rough approximation, they do not depend on the pinning mechanism in the sample. All our theoretical predictions are consistent with the known experimental results. ## Acknowledgments We are very grateful to A. Vázquez for many interesting discussions and suggestions. We also acknowledge useful comments from M. Paczuski, K. Bassler, D. Domínguez, E. Osquiguil, O. Sotolongo and C. Rodríguez. E. A. acknowledges partial financial support from the World Laboratory Center for Pan-American collaboration in Science and Technology, the Texas Center for Superconductivity, and the Department of Physics, University of Houston. ## Figure captions Figure 1 Magnetic relaxation curves for systems of sizes $`L=60,100,200`$. $`U(j)\propto j_c/j`$, $`U_o/kT=\mathrm{\infty }`$. The inset shows the data collapse of the curves. 
Figure 2 Magnetic relaxation curves for systems of sizes $`L=60,100,200`$. $`U(j)\propto j_c/j`$, $`U_o/kT=10`$. The inset shows the collapse of the curves. Figure 3 Avalanche size distribution for $`L=200`$, $`U(j)\propto (1-j/j_c)`$, $`U_o/kT=10`$. Figure 4 Avalanche time distribution for $`L=200`$, $`U(j)\propto (1-j/j_c)`$, $`U_o/kT=10`$. Figure 5 Avalanche size distribution for $`L=200`$, $`U(j)\propto j_c/j`$, $`U_o/kT=10`$. From the upper to the lower curve: $`t`$ = 1–10, 11–100, 101–1000, 1001–10000 and 10001–100000 m.c.s. The straight line represents a power law with exponent 2.7.
# Notes on Decoherence at Absolute Zero ## I Introduction Decoherence is the process which, through the interaction of the system with external degrees of freedom referred to as an environment, sustains a loss of quantum coherence in a system. It defines the transition from the quantum behavior of a closed system, which thus possesses unitarity or time reversibility and displays interference due to the superposition of its wave function, to the classical behavior of the same as an open system; the loss of unitarity or time-reversal symmetry leads to a loss of interference. This openness comes from the coupling of the quantum system to an environment or a bath. A closed system, on the other hand, does not undergo decoherence. The quantum system in question could be an electron, whereas the environment could be thermal phonons or photons, and even other electrons whose properties are not measured. The coarse-graining of the irrelevant degrees of freedom defining the environment, which are not of interest to the measurement, generates both dissipation and decoherence: the latter formally related to the decay of the off-diagonal terms of the reduced density matrix operator denoting the quantum system. The interpretational problem with decoherence, and in fact the notion of decoherence itself, vanishes when one treats the system-environment combination as one indivisible quantum object. The combination is closed and evolves unitarily according to the laws of quantum mechanics, transforming pure states into pure states; hence there is no decoherence. The problem only arises in the splitting of the whole into “a system of interest” to the observer or the experiment, and the remaining degrees of freedom as “the environment”. This split is necessary and must be acknowledged from the observer’s or experiment’s perspective. Interestingly, a pure state of the closed combination is compatible with each part being in a mixed state. Decoherence is obtained by considering the density matrix operator for the combination and partially tracing out the irrelevant degrees of freedom, namely those of the environment. The reduced density matrix operator then represents the “effective” system alone as a statistical mixture, which is of interest to a measurement in an experiment. An initially isolated system inevitably loses quantum coherence due to its coupling to a complex or a “large” environment with very many degrees of freedom. When both the system and the environment are treated quantum mechanically, quantum entanglement becomes an important concern for the loss of coherence. The loss of coherence of an electron inside a disordered conductor occurs due to the interaction with environments: its coupling to localized spins (pseudo or magnetic), electron-phonon interactions and electron-electron interactions, the latter being dominant at low temperature. Conventional theories decree that the suppression of coherence, characterized by a decoherence rate $`1/\tau _\varphi `$, vanish with decreasing temperature, ultimately giving a Fermi-liquid ground state. However, in experiments a finite decoherence rate is observed at low temperatures, which perhaps persists down to $`T=0`$. Considering the consequences of such an observation, to be discussed in sections 3 and 4, it is imperative to put the experimental observation on firm ground. Towards that end, our experimental observation of $`\tau _\varphi `$ saturation has undergone the extensive experimental checks detailed in section 2. 
Corroborating problems in mesoscopics, denoting severe discrepancies between experiments and the conventional theories, are outlined in section 3, and a connection between these discrepancies and $`\tau _\varphi `$ saturation is made. In the final section, zero temperature decoherence and the role of quantum fluctuations of the environment are put in a broader perspective. It is argued that the zero temperature decoherence observed in low-dimensional electronic systems is important in understanding various low temperature properties of metals, and its acceptance as an intrinsic effect appears imminent. ## II Electron and its environments: Measurement of electron decoherence rate Inside a disordered conductor, an electron undergoes various kinds of interference. The interference of two paths in a doubly-connected geometry gives an Aharonov-Bohm correction to the electron conductance, which can be modulated periodically as a function of the applied field. Similarly, interference corrections arising from paths inside a conductor in a singly-connected geometry give reproducible conductance fluctuations. If the interfering paths are a time-reversed pair, then the correction to the conductance gives weak localization, which can be suppressed by the application of a magnetic field. Persistent currents are also observed due to interference in isolated metal rings. Interference due to phase coherence in the electron wave function can be studied using any of these effects if the exact dependence of the measured quantity can be explicitly expressed in terms of a decoherence rate $`1/\tau _\varphi `$. The weak localization correction, though the least exotic of the effects mentioned above, gives a single-parameter estimate of the decoherence rate $`1/\tau _\varphi `$ without any further assumption regarding the effect. Physically, it is then meaningful to picture the breaking of time-reversal symmetry and the emergence of non-unitarity as the suppression of interference between the time-reversed paths by an applied magnetic field. Fig. 1 displays a small representative sample of the vast body of data available in the literature. What is observed in the experiments is the following: (a) At high temperatures the decoherence rate $`1/\tau _\varphi `$ is temperature dependent due to various mechanisms such as electron-phonon and electron-electron interactions, but at low temperatures the rate inevitably saturates, suggesting the onset of a temperature independent mechanism. (b) The limiting rate $`1/\tau _0`$ and the temperature at which it dominates vary over a wide range depending on the system, though a one-to-one correlation with sample parameters such as the diffusion constant $`D`$, or the resistance per unit length $`R/L`$, can be made very accurately. A compilation of some saturation data in various systems is contained in ref. In view of these experiments it seems plausible that the observed saturation could be a real effect. Such a hypothesis must be thoroughly investigated, since a saturation of the decoherence rate, suggesting an intrinsic decoherence, is known to have serious consequences. To that end, we have performed various control experiments which suggest that this limiting mechanism is not due to any artifacts and is intrinsic. Extensive checks for the role of various artifacts include the following: ### A Heating of the system: Loss of thermal contact of the electrons in the sample with the cryostat would imply that the temperature of the sample is locked at the apparent saturation temperature $`T_0`$. 
In the experiment, the electron temperature was determined by measuring the electron-electron interaction (EEI) correction to the conductivity at a magnetic field strong enough to quench weak localization. The electron temperature was found to be in equilibrium with the cryostat down to a temperature an order of magnitude lower than $`T_0`$. ### B Magnetic impurities: Magnetic impurities such as iron (Fe) in a host metal of gold (Au) were shown not to cause saturation, contrary to an earlier notion and consistent with other experiments. A detailed study revealed interesting properties of Kondo systems in quasi-1D, different from the anticipated behavior of bulk Kondo systems. ### C External high-frequency noise: Initial checks, confirmed by subsequent controlled experiments, showed that externally generated high-frequency (HF) noise did not cause dephasing before heating the sample to a substantially higher temperature. A similar control experiment on the saturation of $`1/\tau _\varphi `$ in quantum dots reached the same conclusion. ### D Two-level systems: Recently an argument was made that nonmagnetic impurities, which in principle give rise to a dynamic or time-dependent disorder, could be responsible for the observed saturation; such defects, usually modeled as two-level systems (TLS), result in the usual low-frequency $`1/f`$ noise in conductors. For the following reasons TLS can be ruled out as the effective environment in our experiments: (a) A typical noise power of $`10^{-15}`$ W at $`\sim `$ 1 GHz ($`\omega \sim \tau _0^{-1}`$), required for dephasing, would suggest a power level of 1 $`\mu W`$ or higher at low frequencies (1 mHz-10 Hz). At such high power levels one would anticipate the observation of low-frequency switching or hysteresis. Neither phenomenon was observed in our experiments on timescales of months. (b) Another reason for the TLS to be ineffective in our gold samples is the signature of mesoscopic dimensions in the temperature dependence, contrary to the bulk dependence expected in any “Kondo-like” theory. (c) In the model, $`\tau _\varphi \propto T^{-1}`$ in the temperature dependent regime, whereas in the experiment a $`1/\sqrt{T}`$ dependence was observed for most of the metallic samples. Another construct based on the presence of dynamical nonmagnetic impurities or TLS suggests that the coupling between the TLS and the electrons in a metal could give finite scattering even at T=0 in the non-Fermi-liquid regime, i.e. below the corresponding Kondo temperature $`T_K`$. In this clever construct it is expected, above and beyond the anticipated behavior of TLS discussed earlier, that the observed saturation rate will be non-unique and history dependent. But no dependence on history or on annealing was observed over a period of months in our experiments. For these reasons two-level systems are not thought to be relevant to our observed saturation. ### E Openness to external phonons in the leads: This nonequilibrium effect arises because of the contact leads to the sample, necessary for measurement. It has been suggested, based on earlier arguments, that due to electron-phonon coupling, phonons in the leads exist as an inevitable extrinsic environment. The associated phonon emission process gives an effective lifetime to the electron. It is argued that the low temperature saturation is determined by the contact geometry and configuration, while the dependence at high temperature is determined by material properties. 
First, in our experiments, in anticipation of such a possibility, the 2D contact pads were fabricated at least 3-5 $`L_\varphi `$ away from the four-probe part of the sample. The leads to the 2D pads, over this length, had the same geometry as the sample itself. No effect of the 2D pads on the weak localization traces was detected, and the traces were very different from the 2D weak localization functional form. For the high temperature part, a large body of data compiled in ref. shows the lack of material dependence of $`\tau _\varphi `$; the observed variation is due to the very different diffusion constants and other sample parameters of the systems compared. Finally, the description of the saturation value in terms of only intrinsic parameters of the sample argues against the effectiveness of the proposed mechanism in our experiments. ### F Other artifactual environments: There are suggestions that gravity is an inevitable environment, making every system essentially open. It has also been argued that the nuclear magnetic moment of gold, representing nuclear degrees of freedom, may provide an effective environment for temperature independent decoherence. The last two suggested mechanisms have been ruled out by incorporating experiments on different materials, and by the observation of the obvious parametric size dependence in the same material (Au). After considering most of the extraneous effects it was concluded that the observed saturation of $`\tau _\varphi `$ in our experiments is an intrinsic effect. ## III Manifestation of zero temperature decoherence in mesoscopic physics If the premise is assumed, for the sake of argument in this section, that the temperature independent dephasing of electrons is intrinsic, then the saturation of $`\tau _\varphi `$ must manifest itself ubiquitously in low-dimensional electron systems, by behavior including but not limited to the low temperature saturation of the appropriate physical quantity. ### A Saturation of $`\tau _\varphi `$ in all dimensions: The data in Fig. 1 show saturation of $`\tau _\varphi `$ in quasi-1D and 2D disordered conductors. Recent experiments report the observation of saturation of $`\tau _\varphi `$ in open ballistic quantum dots, representing 0D systems, below a temperature of 100 mK in one set of experiments, and below 1 K in another set of experiments. $`\tau _\varphi `$ in 3D amorphous Ca-Al-X (X=Au,Ag) alloys also saturates below 4 K. ### B Other manifestations in quasi-1D: persistent current and e-e interaction: It is understood that the saturation of $`\tau _\varphi `$ would imply a similar saturation in the electron-electron interaction (EEI) correction to the conductivity, measured at a finite field with the weak localization contribution quenched. The saturation temperature for the EEI correction should be lower, and comparable to $`\mathrm{\hbar }/\tau _0`$. Experiments do show such a saturation, and a strong correlation between the EEI saturation temperature and $`\tau _0`$. Saturation of $`\tau _\varphi `$ also offers a solution to the problem of persistent currents in normal metals, namely that the observed current is too large and diamagnetic. In experiments, the range of temperature in which a persistent current is measured is indeed the same in which $`\tau _\varphi `$ is saturated. Intrinsic high-frequency fluctuations, responsible for $`\tau _\varphi `$ saturation, would imply the presence of a non-decaying diffusion current, corresponding to the persistent current, with a size comparable to $`e/\tau _D\sim eD/L^2`$. 
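As a rough order-of-magnitude check of the scale $`e/\tau _D\sim eD/L^2`$ quoted above, the snippet below uses illustrative, assumed values for a diffusive metal ring; the numbers are not taken from the experiments cited.

```python
# Order-of-magnitude estimate of the persistent-current scale e/tau_D ~ e*D/L^2.
e = 1.602e-19        # electron charge, C
D = 1.0e-2           # diffusion constant, m^2/s (assumed, typical dirty metal)
L = 1.0e-6           # ring circumference, m (assumed)

tau_D = L**2 / D     # diffusion time around the ring
I = e / tau_D        # current scale
print(f"tau_D = {tau_D:.2e} s, I ~ {I*1e9:.2f} nA")   # ~ a few nA
```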
### C Transition from weak to strong localization in quasi-1D conductors: A finite decoherence rate at zero temperature is expected to stop the Thouless transition from weakly to strongly localized states. This disorder-driven transition to localized states in quasi-1D, with characteristic length scale $`\xi `$, has two possible courses depending on the competing diffusion length scale $`L_\varphi `$ at $`T=0`$: (i) complete suppression (no transition at all, $`L_0<\xi `$); (ii) inhibition (the activation with decreasing temperature, denoting a transition to a strongly localized state, inevitably saturates: $`L_0>\xi `$ in the experimental range). Both aspects have been well documented in experiments on $`\delta `$-doped GaAs wires and GaAs-Si wires. ### D Lack of one-parameter scaling: The one-parameter scaling theory of localization, the foundation of the theory of low-dimensional conductors, requires the phase coherence length to diverge as a negative power of $`T`$: $`L_\varphi \propto T^{-p/2}`$. A finite temperature-independent decoherence length $`L_0`$ immediately suggests a breakdown of the one-parameter scaling theory. Experiments on Si-MOS systems have convincingly shown the lack of one-parameter scaling. ### E Metallic behavior in 2D systems: In contrast to the conventional theory of metals, which purports that 2D systems at $`T=0`$ become insulators with zero conductivity, recent experiments find metallic behavior at low temperatures. Furthermore, at low temperatures the conductivity of the metallic state is observed to saturate at a finite value. A nonvanishing decoherence of the electron would indeed suggest finite diffusion of the electron, and hence neither a Fermi-liquid ground state nor an insulating state with zero conductivity at $`T=0`$. With decreasing temperature, disorder-driven localization is suppressed by zero-temperature dephasing, sometimes even before its onset, depending on the competition. Formation of insulating states is inhibited by the diffusion induced by zero-temperature decoherence, irrespective of the initial states. The quantum-Hall-to-insulator transition is in some sense similar to the transition in quasi-1D or 2D conducting systems. A quantum-Hall system beyond a critical field $`B_c`$ becomes insulating, with a diverging $`\rho _{xx}`$ as $`T`$ is reduced. However, the formation of this insulating state is expected to be inhibited, with a low-$`T`$ saturation of the increasing $`\rho _{xx}`$. Such a saturation has been observed and, on the basis of a recent theory, it is related to a finite dephasing length at low $`T`$; this may perhaps be the size of the puddle in the quantum-Hall liquid. Likewise, in the superconductor-to-insulator transition in 2D a-MoGe films a similar leveling of the resistance was observed, with the conclusion that the saturation is due to the coupling to a low-temperature dissipative environment. ## IV Counterpoint to conventional theories The conventional theory of metals, specifically in low dimensions, is based on the scaling laws of localization augmented by a perturbative treatment of interaction. The very nature of these theories requires that the phase coherence length diverge with decreasing temperature according to a power law, $`L_\varphi \propto T^{-p/2}`$, for some positive $`p`$. The early phenomenological motivation of such a diverging form at low $`T`$ was formalized in a perturbative calculation of the dephasing length.
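Before turning to the theories themselves, it is useful to see how little is needed to parametrize the competition just described. A minimal crossover model, $`1/\tau _\varphi (T)=1/\tau _0+aT^p`$, reduces to the conventional power law when $`\tau _0\to \infty `$ and saturates otherwise; the sketch below (all parameter values are made up for illustration) shows the resulting leveling-off of $`L_\varphi =\sqrt{D\tau _\varphi }`$:

```python
import numpy as np

# Crossover model for the dephasing rate: 1/tau_phi = 1/tau_0 + a*T^p.
# tau_0, a, p and D are illustrative values, not fits to any data set.
tau_0 = 1e-9       # saturation time [s]
a, p = 1e9, 1.0    # thermal prefactor [s^-1 K^-p] and exponent
D = 1e-2           # diffusion constant [m^2/s]

T = np.logspace(-3, 1, 9)                 # temperature [K]
tau_phi = 1.0 / (1.0/tau_0 + a * T**p)    # dephasing time [s]
L_phi = np.sqrt(D * tau_phi)              # dephasing length [m]

for Ti, Li in zip(T, L_phi):
    print(f"T = {Ti:9.3e} K   L_phi = {Li:.3e} m")
# As T -> 0, L_phi levels off at sqrt(D*tau_0) instead of diverging as T^(-p/2).
```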
The structure of the Fermi-liquid picture, in which the electron interaction can be treated as low-lying excitations of a non-interacting system while maintaining the Fermi-liquid ground state at $`T=0`$, is fully retained even in the presence of disorder in low dimensions. Our experimental observation of the saturation of $`\tau _\varphi `$, or equivalently of $`L_\varphi `$, argues against the premise of the conventional theory, and it contradicts the supporting theory of electron dephasing in low dimensions. In the last two sections, a phenomenological case was made against the premise of the conventional theory. In the following we briefly discuss the lack of validity of these theories at low temperatures. Let us consider just the electron-phonon interaction for the sake of argument. In a conductivity experiment only the scattering rate of the electron is measured, which includes electron-phonon scattering. Traditionally, the phonon states available for an electron to scatter off depend on the temperature $`T`$ via their thermal population. As $`T\to 0`$, this population shrinks to zero, making the scattering rate of the electron vanish. By this argument most scattering mechanisms yield a vanishing scattering rate at $`T=0`$, as long as the states to be scattered off are thermally populated. Non-thermal scattering processes obviously do not have to vanish at $`T=0`$. The phase shift $`\delta \varphi `$ in the electron wave function arising from electron scattering, say off the phonons in a phonon bath, is random ($`\langle \delta \varphi \rangle =0`$), and on averaging it produces a dephasing effect, suppressing the interference term by a factor $`e^{-t/\tau _\varphi }\sim \langle e^{i\delta \varphi }\rangle `$. This treatment assumes that (a) phase shifts arise only in the presence of a thermal population, and (b) the bath of phonons itself does not undergo any change which might have an effect or back reaction on the electron; in other words, there is no entanglement between the electron and the bath. The last two statements are often phrased differently: the electron acquires phase shifts due to its coupling to equilibrium fluctuations of the bath, and the vanishing population through which $`T`$ enters the equation has to satisfy the law of detailed balance. This is a point of view, and a limited one at best, for the following reason. If one starts with the ground state of the electron and the ground state of the environment and the coupling is turned on, then the product state evolves in such a way, even at zero temperature, that after a certain time the electron is no longer entirely in its ground state; there is only a fractional probability of finding the electron in its ground state. In other words, the electron can be described only by a mixed state, involving both the environmental variables and the electron variables. The electron can be measured only after integrating out the irrelevant environmental variables, the very process that introduces decoherence. What is measured in the above-mentioned experiments is not a property of the combined system of the electron and the environment: in the measurement process the environmental degrees of freedom are averaged out, yet the effect of the averaging is still retained in the measured quantity. Thus the electron cannot be considered a closed system, and the notion of a unique ground state in such a case is meaningless. An electron must exhibit zero-temperature decoherence if it is coupled to a phonon bath; the problem is isomorphic to the Caldeira-Leggett model, which does indeed show zero-temperature decoherence.
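The suppression factor invoked above is easy to demonstrate numerically: averaging the interference term over random phase shifts kills it, whatever the microscopic origin of the randomness. In the following minimal Monte Carlo sketch, Gaussian phases of growing variance stand in for the accumulated scattering phase, and the exact relation $`\langle e^{i\delta \varphi }\rangle =e^{-s^2/2}`$ for Gaussian $`\delta \varphi `$ of variance $`s^2`$ is recovered:

```python
import numpy as np

# Dephasing as phase averaging: the interference term is proportional to
# <exp(i*delta_phi)>. For Gaussian delta_phi with zero mean and variance s^2
# this average equals exp(-s^2/2), so growing phase variance suppresses it.
rng = np.random.default_rng(0)
N = 100_000                                   # number of phase realizations

for s in (0.0, 0.5, 1.0, 2.0, 4.0):           # standard deviation of delta_phi
    phi = rng.normal(0.0, s, N)               # random phase shifts, <phi> = 0
    mc = abs(np.mean(np.exp(1j * phi)))       # Monte Carlo average
    print(f"s = {s:3.1f}:  |<e^(i phi)>| = {mc:.4f}   (exact: {np.exp(-s*s/2):.4f})")
```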
The same is true for an electron coupled to a fluctuating electromagnetic field, representing the electron-electron interaction, in spite of complications due to the Pauli exclusion principle. To summarize the case against the conventional theories: (a) the experimental evidence is overwhelmingly against them; (b) a quantum mechanical treatment of the problem does give agreeable results; (c) certain other outstanding problems can be understood with the notion of zero-temperature decoherence; and finally (d) the basic theory of decoherence in the exactly solvable Caldeira-Leggett model is contrary to the conclusions of these theories. ## V Endnotes: Quantum fluctuations and decoherence To explain the results of the experiments, it was suggested that high-frequency fluctuations of quantum origin could indeed cause the saturation. Following the well-established concept of dephasing of an electron by “classical” electromagnetic field fluctuations, it is reasonable to consider decoherence due to the coupling of the electron to quantum fluctuations of the field. Such an extension is not new, and is well known in quantum Brownian motion. A particle coupled linearly to a bath of oscillators, all in their individual ground states, shifts the equilibrium positions of the individual oscillators without exciting them. The resulting back reaction on the particle causes both dissipation and decoherence even at absolute zero, the latter quantified by the decay of the off-diagonal elements of the reduced density matrix in the long-time limit. A similar construct has been made earlier in the mesoscopic context. The cut-off dependent result is universal in the mesoscopic models as well as in the quantum Brownian motion models. The initial back-of-the-envelope calculation surprisingly described the saturation rate observed in many experiments. The rigorous and commendable calculations which verified the notion have been severely criticized. Though the latter, critical calculations are self-consistent, these theories fail at the starting point. A pedestrian argument against the use of the “law of detailed balance” is that it describes only thermal transitions. To understand zero-temperature effects one must add a non-thermal part, put in by hand, as is normally done for spontaneous emission in the Einstein rate equations for a laser. As mentioned in the Introduction, it is the entanglement of the environment with the electron that contributes to the decoherence, even though energy exchange is not allowed between the individual non-interacting parts, i.e. the electron and the electromagnetic field modes. The combination is a closed system and does evolve unitarily without decoherence, but the individual parts can remain in mixed states at the same time. In terms of photons, one can imagine the electron exchanging virtual pairs of photons with the field along two different interfering paths. Such an interpretation is often misunderstood as the dressing of an electron or an atom by vacuum fluctuations, and is often a source of confusing debate. There have been a few parallel developments surrounding the question of whether or not quantum fluctuations can cause decoherence. The role of vacuum fluctuations in decohering atomic coherence has been discussed recently. The decoherence of an electron due to its coupling to vacuum fluctuations has also been considered previously, with an affirmative conclusion. There was another interesting development on the problem of a quantum limit of information processing pertaining to computation.
Starting from a well-known result in black-hole entropy theory, a proposal was made suggesting quantum-limited information loss, quantified by entropy. This was again severely criticized, with the argument that zero-point energy cannot be dissipated as “heat”. Though the debate was unresolved, it has since become known, in the refined description of decoherence, that a part of the entropy can reside in the correlations: the sum of the entropy of the system, that of the bath, and that contained in the correlations is equal to the entropy of the combination. This is a different way of saying that a pure state of the combination is consistent with partially mixed states. All of these debates remained unsettled due to the lack of experiments. Fortunately, our problem starts from experimental results. In conclusion, our experiments, along with almost all existing experiments on the direct or indirect measurement of the decoherence rate, are more than suggestive of a non-thermal mechanism, which is in all probability intrinsic. The existence of field fluctuations at frequencies higher than the temperature, irrespective of their origin, can explain various discrepancies in mesoscopic physics. In this paper we briefly discussed how the persistent current and the electron-electron interaction correction may be affected by the saturation of the decoherence rate. Following similar arguments, the experimentally observed formation of metallic states in 2D, the lack of universal one-parameter scaling, the suppression or saturation of strong localization, and the suppression of the quantum-Hall-to-insulator transition can be understood. In mesoscopic physics alone, the fundamental role of the low-temperature behavior of electron decoherence cannot be overemphasized. I acknowledge experimental collaboration with R.A. Webb and E.M.Q. Jariwala, and the formal support of M.L. Roukes.
# Untitled Document This paper has been withdrawn by the author.
# §1 Introduction ## §1 Introduction Gromov introduced several notions of largeness of Riemannian manifolds. For example, a manifold $`X`$ which is a universal cover of a closed aspherical manifold $`M^n`$ with the fundamental group $`\mathrm{\Gamma }=\pi _1(M^n)`$, supplied with a $`\mathrm{\Gamma }`$-invariant metric is large in a sense that it is uniformly contractible. We recall that $`X`$ is uniformly contractible if there is a function $`S(r)`$ such that every ball $`B_r(x)`$ of the radius $`r`$ centered at $`x`$ is contractible to a point in the ball $`B_{S(r)}(x)`$ for any $`xX`$. For the purpose of the Novikov conjecture and other related conjectures, it is important to show that universal covers of aspherical manifolds are also large in some cohomological sense. The weakest such property is called hypersphericity. ###### Definition \[G-L\] An $`n`$-dimensional manifold $`X`$ is called hyperspherical if for every $`ϵ>0`$ there is a $`ϵ`$-contracting proper map $`f_ϵ:XS^n`$ of nonzero degree onto the standard unit $`n`$-sphere. Here a continuous map $`f:XS^n`$ is proper if it has only one unbounded preimage. Gromov and Lawson proved the following \[G-L\] ###### Gromov-Lawson's Theorem An aspherical manifold with a hyperspherical universal cover cannot carry a metric of a positive scalar curvature. The natural question appeared \[G2\]: ###### Problem 1 Is every uniformly contractible manifold hyperspherical? Above we defined a notion of the rational hypersphericity. One can define integral hypersphericity by taking maps of degree one. Also the $`n`$-sphere can be replaced by $`\text{R}^n`$. In that case we obtain the notion of a hypereuclidean manifold. In the integral case these two notions seems coincide. Also in the integral case the above question of Gromov has a negative answer \[D-F-W\]. Still there is no candidates for a rational counterexample and even new examples of Gromov \[G4\] leave a possibility for an affirmative answer. The following definition is due to Gromov \[G1\]: ###### Definition A map $`f:XY`$ between metric spaces is called large scale uniform embedding if there are two functions $`\rho _1,\rho _2:[0,\mathrm{})[0,\mathrm{})`$ tending to infinity such that $`\rho _1(d_X(x,x^{}))d_Y(f(x),f(x^{}))\rho _2(d_X(x,x^{}))`$. for all $`x,x^{}X`$. In Section 2 of this paper we prove the following: ###### Theorem 1 Suppose that a uniformly contractible manifold $`X`$ admits a large scaale uniform embedding into $`\text{R}^n`$. Then $`X\times \text{R}^n`$ is integrally hyperspherical. ###### Corollary Let $`M`$ be a closed aspherical manifold and assume that the fundamental group $`\mathrm{\Gamma }=\pi _1(M)`$ admits a coarsely uniform embedding into $`R^n`$ as a metric space with the word metric. Then $`M`$ cannot carry a metric with a positive scalar curvature. Some results of that type were already known to Gromov (see his remark (b), page 183 of \[G3\]). Also the Corollary follows from the theorems of Yu \[Yu1\],\[Yu2\]. ###### The First Yu's Theorem If a finitely presented group $`\mathrm{\Gamma }`$ has a finite asymptotic dimension, in particular if it is coarsely uniformly embeddable in $`\text{R}^n`$, then the coarse Baum-Connes conjecture holds for $`\mathrm{\Gamma }`$. ###### The Second Yu's Theorem If a finitely presented group $`\mathrm{\Gamma }`$ can be uniformly in large scale sense embedded into a Hilbert space, then the coarse Baum-Connes conjecture holds for $`\mathrm{\Gamma }`$. We note that The Second Yu’s theorem implies The First \[H-R2\]. 
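For orientation, the model case of the hypersphericity definition above is immediate (this standard example is not part of the argument below, but illustrates the definition): $`\text{R}^n`$ itself is integrally hyperspherical. Given $`ϵ>0`$, let $`\lambda _ϵ(x)=(ϵ/\pi )x`$ and let $`q:\text{R}^n\to S^n`$ be the degree-one map sending $`x`$ with $`\|x\|\le 1`$ to $`(\mathrm{sin}(\pi \|x\|)x/\|x\|,\mathrm{cos}(\pi \|x\|))`$ and the complement of the unit ball to the south pole. The map $`q`$ is $`\pi `$-Lipschitz, and its only unbounded point-preimage is that of the south pole, so $`f_ϵ=q\lambda _ϵ`$ is a proper $`ϵ`$-contracting map of degree one. The content of Problem 1 is whether such maps survive the passage from $`\text{R}^n`$ to an arbitrary uniformly contractible manifold.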
Open Riemanian manifolds which obey the coarse Baum-Connes conjecture are of course large in some refined sense. This largeness is relevant to the hypersphericity or the hypereuclidianess but it is different. The best we can say \[Ro1\] that integrally (rationally) hypereuclidean manifolds satisfy the monicity part of the coarse (rational) Baum-Connes conjecture. We note that the monicity part of the coarse Baum-Connes is sufficient for the Gromov-Lawson conjecture about nonexistence of a positive scalar curvature metric on a closed aspherical Reimannian manifolds. Both Yu’s theorems hold for all proper metric spaces with bounded geometry. We recall that a metric space $`X`$ is called having a bounded geometry if for every $`ϵ>0`$ and every $`r>0`$ there is $`c`$ such that the $`ϵ`$-capacity of every $`r`$-ball $`B_r(x)`$ does not exceed $`c`$. The latter means that a ball $`B_r(x)`$ contains no more than $`c`$ $`ϵ`$-disjoint points. ###### Definition An $`n`$-dimensional manifold $`X`$ is called stably (integrally) hyperspherical if for any $`ϵ>0`$ there is $`m`$ such that $`X\times \text{R}^m`$ admits an $`ϵ`$-contracting map of nonzero degree (of degree one) onto the unit $`(n+m)`$-sphere $`S^{n+m}`$. We prove the following ###### Theorem 2 Suppose that $`X`$ is a uniformly contractible manifold with bounded geometry and assume that $`X`$ admits a coarsely uniform embedding into a Hilbert space. Then $`X`$ is stably integrally hyperspherical. It is unclear whether the stable hypersphericity implies the Gromov-Lawson conjecture. A positive answer to the following problem would give a simple argument for the implication: Stable Hypersphericity $``$ Gromov-Lawson. ###### Problem 2 Does there exist a positive constant $`c>0`$ such that the $`K`$-area of the unit $`2n`$-sphere is greater than $`c`$ for all $`n`$ ? We recall the definition of $`K`$-area from \[G3\]. For an even dimensional Riemannian manifold $`M`$ the $`K`$-area is $`Karea(M)=(inf_X\{R(X)\})^1`$ where the infimum is taken over all bundles $`X`$ over $`M`$ with some of the Chern numbers nonzero and with unitary connections, $`R(X)`$ is the curvature of $`X`$ equipped with the operator norm, i.e. $`A=sup_{x=1}Axx`$. The other way to establish the implication: Stable Hypersphericity $``$ Gromov-Lawson would be extending the notion of $`K`$-area on loop spaces and showing that the $`K`$-area of $`\mathrm{\Omega }^{\mathrm{}}\mathrm{\Sigma }^{\mathrm{}}S^n`$ is greater than zero. One of the objectives of this paper is to give elementary proofs of Yu’s theorems with replacing the coarse Baum-Connes conjecture by the hypersphericity. It is accomplished with the First Yu’s Theorem. In Section 3 we prove The Embedding Theorem for asymptotic dimension. Then a slightly more general version of Theorem 1 completes the proof. The Theorem 2 can be considered as a version of the second Yu’s Theorem. Neverertheless the same set of corollaries would follow from it in the case of positive answer to Problem 2. Otherwise to obtain a proper analog of the Second Yu’s theorem on this way one should do more elaborate differential geometry in the argument in order to avoid stabilizing with $`\text{R}^m`$. ## §2 Proofs of Theorems 1 and 2 First we note that in both theorems it suffices to consider the case when $`X`$ is isometrically embedded in $`\text{R}^n`$ ( or $`l_2`$). 
Since a contractible $`k`$-manifold $`X`$ becomes homeomorphic to $`\text{R}^{k+1}`$ after crossing with $`\text{R}`$, without loss of generality we may assume that $`X`$ is homeomorphic to $`\text{R}^k`$. Fix a homeomorphism $`h:\text{R}^k\to X`$. We denote by $`S_r^{k-1}`$ the standard sphere in $`\text{R}^k`$ of radius $`r`$ with center at $`0`$. Note that the family $`h(S_r^{k-1})`$ tends to infinity in $`X`$ as $`r`$ approaches infinity. Denote by $`N_\lambda (A)`$ the $`\lambda `$-neighborhood of $`A`$ in an ambient space $`W`$. By $`B_r^m`$ we denote the standard $`r`$-ball in $`\text{R}^m`$. ###### Lemma 1 Let $`X`$ be a uniformly contractible manifold with bounded geometry, homeomorphic to $`\text{R}^k`$ and isometrically embedded in a metric space $`W`$. Then for any $`\lambda >0`$ there is $`r>0`$ such that the neighborhood $`N_\lambda (h(S_r^{k-1}))`$ in $`W`$ admits a retraction onto $`h(S_r^{k-1})`$. ###### Demonstration Proof Let $`S:\text{R}_+\to \text{R}_+`$ be a monotone contractibility function on $`X`$. Clearly, $`S(t)\ge t`$. Since $`X`$ is a space of bounded geometry, there is a uniformly bounded cover $`𝒰`$ of $`X`$ of finite multiplicity $`m`$ and with Lebesgue number $`>4\lambda `$ (see \[H-R1\] or \[Dr\]). If we squeeze every element $`U\in 𝒰`$ by $`\lambda `$ we get a cover $`𝒰^{}`$ with Lebesgue number $`>3\lambda `$. For every $`U\in 𝒰^{}`$ we consider the open $`\lambda `$-neighborhood $`ON_\lambda (U)`$ in $`W`$. Then the cover $`\stackrel{~}{𝒰}=\{ON_\lambda (U):U\in 𝒰^{}\}`$ has multiplicity $`m`$. Let $`d`$ be an upper bound for the diameters of elements of the cover $`\stackrel{~}{𝒰}`$. Define $`T(t)=2S(4t)`$. Let $`T^l`$ denote the $`l`$-fold iteration of $`T`$. We take $`r`$ such that $`d(h(0),h(S_r^{k-1}))>T^{m+k}(d)`$. Denote by $`\overline{𝒰}`$ the restriction of $`\stackrel{~}{𝒰}`$ over $`h(S_r^{k-1})`$. Note that $`\overline{𝒰}`$ covers the neighborhood $`N_\lambda (h(S_r^{k-1}))=N`$. Let $`\nu :N\to N(\overline{𝒰})`$ be a projection to the nerve of the cover $`\overline{𝒰}`$. We define a map $`\varphi :N(\overline{𝒰})\to X\setminus \{h(0)\}`$ such that the restriction $`\varphi \nu |_{h(S_r^{k-1})}`$ is homotopic to the identity map $`id_{h(S_r^{k-1})}`$. Then by the Homotopy Extension Theorem there is an extension $`\beta :N\to X\setminus \{h(0)\}`$ of the identity map $`id_{h(S_r^{k-1})}`$. Let $`\gamma :\text{R}^k\setminus \{0\}\to S_r^{k-1}`$ be a retraction. We define a retraction $`\alpha :N\to h(S_r^{k-1})`$ as $`h\gamma h^{-1}\beta `$. We define $`\varphi `$ on the $`l`$-skeleton of $`N(\overline{𝒰})`$ by induction on $`l`$, in such a way that the diameter of the image $`\varphi (\sigma ^l)`$ of every $`l`$-simplex $`\sigma ^l`$ does not exceed $`T^l(d)`$. To do that, we choose points $`x_U\in U\cap h(S_r^{k-1})`$ for every $`U\in \overline{𝒰}`$ and define $`\varphi (v_U)=x_U`$ for every vertex $`v_U`$ in $`N(\overline{𝒰})`$ corresponding to an open set $`U\in \overline{𝒰}`$. For every edge $`[v_U,v_U^{}]`$ we define $`\varphi `$ on it in such a way that $`diam\varphi ([v_U,v_U^{}])\le S(d(x_U,x_U^{}))\le S(2d)\le T(d)`$. Assume that $`\varphi `$ is defined on the $`l`$-skeleton with the property that $`diam\varphi (\sigma )\le T^l(d)`$ for all simplices $`\sigma `$. Since the boundary of a simplex of dimension $`\ge 2`$ is connected, for an arbitrary $`(l+1)`$-dimensional simplex $`\mathrm{\Delta }`$ the image of its boundary $`\varphi (\partial \mathrm{\Delta })`$ has diameter $`\le 4T^l(d)`$. Then we can extend $`\varphi `$ over $`\mathrm{\Delta }`$ with $`diam\varphi (\mathrm{\Delta })\le 2S(4T^l(d))=T(T^l(d))=T^{l+1}(d)`$.
A map $`\varphi `$ constructed in this way has the property that $`d(x,\varphi \nu (x))\le 2(d+T^m(d))\le T^{m+1}(d)`$. Consider a small (with mesh smaller than $`\frac{1}{4}T^{m+1}(d)`$) triangulation $`\tau `$ on $`h(S_r^{k-1})`$ and the cellular structure on $`h(S_r^{k-1})\times I`$ defined by it. Using induction one can extend the map equal to $`id`$ on $`h(S_r^{k-1})\times \{0\}`$ and to $`\varphi \nu `$ on $`h(S_r^{k-1})\times \{1\}`$ to a map $`H:h(S_r^{k-1})\times I\to X\setminus \{h(0)\}`$ with the diameter of the image of every $`i`$-dimensional cell less than $`T^{m+i+1}(d)`$. Then $`H(h(S_r^{k-1})\times I)\subset N_{T^{m+k}(d)}(h(S_r^{k-1}))\subset X\setminus \{h(0)\}`$ by the choice of $`r`$. Thus, $`H`$ is the required homotopy.∎ ###### Lemma 2 Let $`M`$ be a closed smooth $`l`$-dimensional submanifold in the euclidean space $`\text{R}^n`$ with a trivial tubular $`ϵ`$-neighborhood $`N_ϵ(M)`$. Then for any number $`d>ϵ`$ there is a number $`\mu `$ such that the diagonal embedding $`j=(1_M\mathrm{\Delta }\mu 1_M):M\to \text{R}^n\times \text{R}^n`$, $`j(x)=(x,\mu x)`$, has a regular neighborhood $`N`$ with projection $`\nu :N\to j(M)`$ such that: 1) there is a short map $`h:(N,\partial N)\to (B_d^{2n-l},\partial B_d^{2n-l})`$; 2) $`pr_1(N)\subset N_{2d}(M)`$, where $`pr_1:\text{R}^n\times \text{R}^n\to \text{R}^n`$ is the projection onto the first factor. ###### Demonstration Proof Let $`q:N_ϵ(M)\to B_ϵ^{n-l}`$ be a trivialization of the tubular neighborhood $`N_ϵ(M)`$. Let $`\lambda `$ be its Lipschitz constant. Take $`\mu =\frac{\lambda d}{ϵ}`$. Also by $`\mu `$ we denote the map $`\mu :\text{R}^n\to \text{R}^n`$ which is multiplication of vectors by $`\mu `$. We extend the embedding $`j:M\to \text{R}^n\times \text{R}^n`$ to the map $`\overline{j}:\text{R}^n\to \text{R}^n\times \text{R}^n`$ defined as $`\overline{j}(x)=(x,\mu x)`$. Thus, the map $`\overline{j}`$ is a homothetic transformation of $`\text{R}^n`$ onto the subspace $`L=\{(x,\mu x):x\in \text{R}^n\}`$ with homothety coefficient $`\sqrt{1+\mu ^2}`$. Therefore $`j(M)`$ admits a trivial tubular $`\delta `$-neighborhood $`N_\delta ^{}`$ in $`L`$ with $`\delta =ϵ\sqrt{1+\mu ^2}`$. We define $`N`$ as the product $`N_\delta ^{}\times B_d^n`$, isometrically realized in $`L\times L^{}`$, where $`L^{}`$ is the orthogonal complement of $`L`$ in $`\text{R}^n\times \text{R}^n`$. We consider the map $`h_1=\frac{d}{ϵ}q\overline{j}^{-1}|_{N_\delta ^{}}:N_\delta ^{}\to B_d^{n-l}`$. Note that $`\frac{d}{ϵ}\lambda (1/\sqrt{1+\mu ^2})`$ is a Lipschitz constant for $`h_1`$. Therefore $`1=\frac{d}{ϵ}\frac{\lambda }{\mu }`$ is also a Lipschitz constant for $`h_1`$. Hence the map $`\overline{h}=h_1\times id_{B_d^n}:N\to B_d^{n-l}\times B_d^n`$ is a short map. Let $`p:B_d^{n-l}\times B_d^n\to B_d^{2n-l}`$ be the natural radial projection. We define $`h=p\overline{h}`$. Clearly, condition 1) holds. Now let $`y\in N`$ be an arbitrary point; we show that $`pr_1(y)\in N_{2d}(M)`$. First, $`y`$ can be presented as $`j(x)+w_1+w_2`$, where $`x\in M`$, $`w_1\in B_\delta ^{n-l}=\nu _1^{-1}(j(x))`$, with $`\nu _1:N_\delta ^{}\to j(M)`$ the natural projection, and $`w_2\in B_d^n\subset L^{}`$. Then $`pr_1(j(x)+w_1+w_2)=pr_1(j(x))+pr_1(w_1)+pr_1(w_2)=x+u+pr_1(w_2)`$, where $`w_1=\overline{j}(u)=(u,\mu u)`$ and $`u`$ is a vector normal to $`M`$ at the point $`x`$ of length $`\le ϵ`$. Then $`\|pr_1(y)-x\|=\|u+pr_1(w_2)\|\le \|u\|+\|w_2\|\le ϵ+d\le 2d`$. Hence $`pr_1(y)\in N_{2d}(M)`$.∎ ###### Demonstration Proof of Theorem 1 Let $`dimX=k`$. For every $`d>0`$ we construct a submanifold $`V\subset X\times \text{R}^n`$ and a short map of degree one $`f:(V,\partial V)\to (B_d^{k+n},\partial B_d^{k+n})`$. Clearly, this implies the integral hypersphericity of $`X\times \text{R}^n`$. By Lemma 1, for large enough $`r`$ there is a retraction of the $`2d`$-neighborhood $`N_{2d}(h(S_r^{k-1}))`$ onto the curved $`(k-1)`$-sphere $`h(S_r^{k-1})`$. Let $`\alpha `$ be this retraction.
We may require that $`\alpha `$ as well as $`h`$ are smooth maps. We assume that $`\text{R}^n\subset S^n`$ is compactified to the $`n`$-sphere and that $`f:S^n\to B_r^k`$ is a smooth extension of $`h^{-1}\alpha `$. Let $`x_0`$ be a regular value of $`f`$ and assume that $`f(\infty )\ne x_0`$. Then the fiber $`M=f^{-1}(x_0)`$ is a closed $`(n-k)`$-dimensional manifold which admits a trivial tubular neighborhood $`N_ϵ(M)`$ for some $`ϵ>0`$. We may assume that $`ϵ<d`$. Then we apply Lemma 2 to obtain an embedding $`j:M\to \text{R}^n\times \text{R}^n`$ with a regular neighborhood $`N`$ having the properties (1)-(2) of the lemma. In view of condition (2) of Lemma 2, for large enough $`R`$ the neighborhood $`N`$ is contained in $`N_{2d}(M)\times B_R^n`$. Hence the boundary $`\partial (h(B_r^k)\times B_R^n)=h(S_r^{k-1})\times B_R^n\cup h(B_r^k)\times \partial B_R^n`$ does not intersect $`N`$. Note that the manifold $`M`$ is linked in $`\text{R}^n`$ with $`h(S_r^{k-1})`$ with linking number one. Also $`M`$ is linked with $`\partial (h(B_r^k)\times B_R^n)`$ with linking number one. Since $`j(M)`$ is homotopic to $`M`$ inside $`N_{2d}(M)\times Int(B_R^n)`$, it follows that $`j(M)`$ is linked with $`\partial (h(B_r^k)\times B_R^n)`$ with linking number one. Consider the intersection $`V=(X\times \text{R}^n)\cap N`$. We may assume that $`V`$ is a manifold with boundary. Since $`\partial V`$ is homologous to $`\partial (h(B_r^k)\times B_R^n)`$ in $`(\text{R}^n\times \text{R}^n)\setminus Int(N)`$, the linking number of $`\partial V`$ and $`j(M)`$ is one. Since the intersection number of $`V`$ and $`j(M)`$ is thus one, the restriction $`h|_V:(V,\partial V)\to (B_d^{n+k},\partial B_d^{n+k})`$ of the short map $`h:N\to B_d^{n+k}`$ from Lemma 2 has degree one.∎ ###### Demonstration Proof of Theorem 2 Let $`dimX=k`$. Let $`K`$ be a triangulation of $`X`$ with diameters of simplices $`\le 1`$. We change the original embedding of $`X`$ into $`l_2`$ to a piecewise linear one which agrees with it on the vertices. We present $`X`$ as a union of finite subcomplexes: $`X=\bigcup K_i`$, $`K_i\subset K_{i+1}`$. Then every complex $`K_i`$ lies in a finite-dimensional euclidean space $`\text{R}^{n_i}\subset l_2`$. Let $`d`$ be given. As in the proof of Theorem 1 we construct a submanifold $`V\subset X\times \text{R}^{n(d)}`$ and a short map of degree one $`f:(V,\partial V)\to (B_d^{k+n(d)},\partial B_d^{k+n(d)})`$. By Lemma 1, for large enough $`r`$ there is a retraction $`\alpha `$ of the $`2d`$-neighborhood $`N_{2d}(h(S_r^{k-1}))`$ in $`l_2`$ onto $`h(S_r^{k-1})`$. There is $`i`$ such that $`h(S_r^{k-1})\subset K_i`$. Then we can work in $`\text{R}^{n_i}`$ as in the proof of Theorem 1, and we get $`n(d)=n_i`$. ∎ REMARK. If we replace the Hilbert space $`l_2`$ in the above argument by the Banach space $`l_{\infty }`$, we obtain the following condition on $`X`$: for every $`ϵ>0`$ there exist $`m`$ and a submanifold with boundary $`W\subset X\times \text{R}^m`$ which admits an $`ϵ`$-contracting map $`f:(W,\partial W)\to (B_{\infty }^{n+m},\partial B_{\infty }^{n+m})`$ of degree one onto the $`l_{\infty }`$ unit ball. It is unclear whether it would be possible to derive the stable hypersphericity of $`X`$ from this (see \[G4\], page 8). ## §3 Embedding Theorem for asymptotic dimension We recall that the asymptotic dimension $`asdimX`$ of a metric space $`X`$ is the minimal number $`n`$, if it exists, such that for any $`d>0`$ there is a uniformly bounded cover $`𝒰`$ of $`X`$ which consists of $`n+1`$ $`d`$-disjoint families $`𝒰=𝒰^0\cup \dots \cup 𝒰^n`$. A family of sets $`𝒱`$ is $`d`$-disjoint if $`d(V,V^{})=\mathrm{inf}\{d(x,x^{}):x\in V,x^{}\in V^{}\}>d`$ for distinct $`V,V^{}\in 𝒱`$. By $`N_r(A)`$ we denote the $`r`$-neighborhood of $`A`$ if $`r>0`$, and the set $`A\setminus N_{-r}(X\setminus A)`$ if $`r\le 0`$.
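As a warm-up for the definition just recalled, $`asdim\text{R}\le 1`$: given $`d>0`$, cover $`\text{R}`$ by the intervals $`I_k=[2kd,2(k+1)d]`$, $`k\in \text{Z}`$, and split this cover into the family of the $`I_k`$ with $`k`$ even and the family with $`k`$ odd. The cover is uniformly bounded (mesh $`2d`$), and within each family distinct intervals are at distance $`2d>d`$ from each other, so it splits into two $`d`$-disjoint families. In fact $`asdim\text{R}=1`$: a single uniformly bounded family of sets at pairwise distance $`>d>0`$ cannot cover the connected unbounded space $`\text{R}`$.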
Let $`mesh𝒰`$ denote an upper bound for the diameters of elements of a cover $`𝒰`$, and let $`L(𝒰)`$ denote the Lebesgue number of $`𝒰`$. ###### Proposition 1 If a metric space $`X`$ with a base point $`x_0`$ has $`asdim(X)\le n`$, then there is a sequence of uniformly bounded open covers $`𝒰_k`$ of $`X`$, each cover $`𝒰_k`$ splitting into a collection of $`n+1`$ $`d_k`$-disjoint families $`𝒰_k=𝒰_k^0\cup \dots \cup 𝒰_k^n`$, satisfying the conditions (1)-(4) used in the sequel. ###### Demonstration Proof We construct the covers by induction on $`k`$. We start with a cover $`𝒰_0`$ with $`d_0>2`$ and enumerate the partition $`𝒰_0=𝒰_0^0\cup \dots \cup 𝒰_0^n`$ in such a way that $`d(x_0,X\setminus U)>d_0`$ for some $`U\in 𝒰_0^0`$. We formulate the condition (3) in a concrete fashion: $`(3)^{}`$ for $`l=m(n+1)+i`$, where $`i=l`$ mod $`n+1`$, there is $`U\in 𝒰_l^i`$ such that $`N_{-d_l}(U)\supset B_m(x_0)`$. Assume that the covers $`𝒰_k`$ are constructed for all $`k\le l`$ in such a way that the conditions $`(1),(2),(3)^{},(4)`$ hold. We define $`d_{l+1}=2^{l+2}m_l`$ and consider a uniformly bounded cover $`\overline{𝒰}_{l+1}`$ with Lebesgue number $`L(\overline{𝒰}_{l+1})>2d_{l+1}`$ and with a splitting into $`n+1`$ $`d_{l+1}`$-disjoint families $`\overline{𝒰}_{l+1}=\overline{𝒰}_{l+1}^0\cup \dots \cup \overline{𝒰}_{l+1}^n`$. We may assume that for all elements $`U\in \overline{𝒰}_{l+1}`$ we have $`N_{-2d_{l+1}}(U)\ne \mathrm{\emptyset }`$: we can simply delete from the cover $`\overline{𝒰}_{l+1}`$ all elements $`U`$ which do not satisfy that property, and since $`L(\overline{𝒰}_{l+1})>2d_{l+1}`$ we still have a cover of $`X`$. We enumerate the families $`\overline{𝒰}_{l+1}^0,\dots ,\overline{𝒰}_{l+1}^n`$ in such a way that $`d(x_0,X\setminus U)>2d_{l+1}`$ for some $`U\in \overline{𝒰}_{l+1}^i`$, where $`i=l+1`$ mod $`n+1`$. For every $`U\in \overline{𝒰}_{l+1}^i`$ we define $`\stackrel{~}{U}=U\setminus \bigcup \{\overline{N_4(V)}:V\in 𝒰_k^i,k\le l,V\not\subset U\}`$. We define $`𝒰_{l+1}^i=\{\stackrel{~}{U}:U\in \overline{𝒰}_{l+1}^i\}`$. Next we check all the properties. (1) Take a point $`x\in X`$; then there exists $`U\in \overline{𝒰}_{l+1}`$ such that $`B_{2d_{l+1}}(x)\subset U`$. Then $`B_{d_{l+1}}(x)\subset U\setminus N_{d_{l+1}}(X\setminus U)\subset U\setminus N_{m_l+4}(X\setminus U)\subset \stackrel{~}{U}`$. Note also that $`N_{-d_{l+1}}(\stackrel{~}{U})=\stackrel{~}{U}\setminus N_{d_{l+1}}(X\setminus \stackrel{~}{U})\supset U\setminus N_{d_{l+1}+m_l+4}(X\setminus U)\supset N_{-2d_{l+1}}(U)\ne \mathrm{\emptyset }`$. (2) This condition holds by the definition. $`(3)^{}`$ This condition holds by the construction. (4) Assume that $`V\in 𝒰_k^i`$, $`k\le l`$, and $`V\not\subset \stackrel{~}{U}`$. If $`V\not\subset U`$, then $`d(V,\stackrel{~}{U})\ge 4`$ by the construction of $`\stackrel{~}{U}`$. Assume that $`V\subset U`$. Since $`V\not\subset \stackrel{~}{U}`$, there exists $`V^{}\not\subset U`$ with $`V^{}\in 𝒰_s^i`$ for some $`s\le l`$ such that $`V\cap \overline{N_4(V^{})}\ne \mathrm{\emptyset }`$. Hence $`d(V,V^{})\le 4`$. By the condition (4) and the induction assumption we have either $`V\subset V^{}`$ or $`V^{}\subset V`$. In the first case it follows that $`d(V,\stackrel{~}{U})\ge d(V^{},\stackrel{~}{U})\ge 4`$. The second case is impossible, since $`V^{}\subset V\subset U`$ contradicts the fact that $`V^{}\not\subset U`$. ∎ ###### Theorem 3 Assume that $`X`$ is a metric space of bounded geometry with $`asdim(X)\le n`$. Then $`X`$ can be uniformly embedded, in the coarse sense, in a product of $`n+1`$ locally finite trees. ###### Demonstration Proof Let $`𝒰_k`$ be a sequence of covers of $`X`$ given by Proposition 1. Let $`𝒱_i=\bigcup _k𝒰_k^i`$. We define a map $`\psi :𝒱_i\to 𝒱_i`$ by the following rule: $`\psi (U)`$ is the smallest $`V\in 𝒱_i`$ with respect to inclusion such that $`U\subset V`$ and $`U\ne V`$. The conditions 3) and 4) of Proposition 1 and the disjointness of the families $`𝒰_k^i`$ for each $`k`$ imply that $`\psi `$ is well-defined. For every $`i\in \{0,\dots ,n\}`$ we construct an oriented graph $`T^i`$ as follows.
For every $`U\in 𝒱_i`$, with $`U\in 𝒰_k^i`$, we consider an interval $`I_U`$ isometric to $`[0,2^k]`$ and oriented from $`2^k`$ to $`0`$. For every $`V\in \psi ^{-1}(U)`$ we attach $`I_V`$ by its $`0`$-end to the integer point $`a_V=\mathrm{min}\{2^k,[\mathrm{sup}\varphi _U(V)\frac{2^k}{d_k}]\}`$ of $`I_U`$, where $`\varphi _U(x)=d(x,X\setminus U)`$ and $`[a]`$ denotes the integer part of $`a`$. We show that the graph $`T^i`$ is a locally finite tree. For every $`U,V\in 𝒱_i`$, by the property 3) there exists $`W\in 𝒱_i`$ such that $`U\cup V\subset W`$; this implies the connectedness of $`T^i`$. Since the orientation on $`T^i`$ defines a flow, i.e. every vertex is an initial point of only one arrow, every cycle in $`T^i`$ would have to be oriented, and oriented cycles in $`T^i`$ do not exist, due to the inclusion nature of the orientation. Thus, $`T^i`$ is a tree. Note that to every nonzero vertex in $`I_U`$ only finitely many intervals are attached. It follows that if a vertex $`v`$ in $`T^i`$ is of infinite order, then $`v`$ must be the $`0`$-vertex of all the intervals involved. So a vertex $`v`$ of infinite order defines an infinite sequence $`U_1\subset U_2\subset \dots \subset U_m\subset \dots `$ with $`\psi (U_j)=U_{j+1}`$ and $`a_{U_j}=0`$ for all $`j`$. Let $`U_j\in 𝒰_{k_j}^i`$; then $`\frac{2^{k_j}}{d_{k_j}}d(x,X\setminus U_{j+1})<1`$ for all $`x\in U_j`$. Hence $`d(x,X\setminus U_{j+1})<d_{k_j}`$ for all $`x\in U_j`$, i.e. $`U_j\cap N_{-d_{k_j}}(U_{j+1})=\mathrm{\emptyset }`$. By the property 3) from Proposition 1 there is $`U\in 𝒰_l^i`$ with $`N_{-d_l}(U)\supset U_1`$. Then the condition 4) implies that $`U=U_j`$ for some $`j`$, whence $`l=k_j`$. Therefore $`U_1\cap N_{-d_l}(U)=\mathrm{\emptyset }`$. Contradiction. Next we define a map $`p_i:X\to T^i`$. By the condition 3) every point $`x\in X`$ is covered by some element $`U\in 𝒰_k^i`$ for some $`k`$. Let $`U`$ containing $`x`$ be taken with the smallest $`k`$. We define $`p_i(x)\in I_U`$ as follows. Consider the map $`\xi `$, defined on the union $`\overline{N}_{-d_k}(U)\cup \partial U\cup \bigcup _{V\in \psi ^{-1}(U)}V`$ with values in $`I_U=[0,2^k]`$, given by $`\xi (\overline{N}_{-d_k}(U))=2^k`$, $`\xi (\partial U)=0`$ and $`\xi (V)=a_V`$. We show that $`\xi `$ is a short map. Let $`y\in V`$ and $`z\in V^{}`$ with $`V,V^{}\in \psi ^{-1}(U)`$, $`V\ne V^{}`$. Then by the condition 4) $`d(y,z)\ge 4`$. Note that $`|a_V-a_{V^{}}|\le \frac{2^k}{d_k}(|d(y,X\setminus U)-d(z,X\setminus U)|+m_{k-1})+2\le \frac{2^k}{d_k}d(y,z)+\frac{2^k}{d_k}m_{k-1}+2\le \frac{1}{4}d(y,z)+3\le d(y,z)`$. Here we applied the condition 2). Now let $`y\in V`$ and $`z\in \partial U`$. Then $`|\xi (y)-\xi (z)|=a_V\le \frac{2^k}{d_k}(d(y,X\setminus U)+m_{k-1})\le \frac{1}{2m_{k-1}}d(y,z)+\frac{1}{2}\le d(y,z)`$, provided $`d(y,z)>1`$. Otherwise $`a_V<1`$ and hence $`a_V=0`$. The case when $`y\in \partial U`$ and $`z\in \overline{N}_{-d_k}(U)`$ is obvious. If $`y\in V`$ and $`z\in \overline{N}_{-d_k}(U)`$, then $`|\xi (z)-\xi (y)|\le 2^k-\frac{2^k}{d_k}(d(y,X\setminus U)-m_{k-1})+1=\frac{2^k}{d_k}(d_k-d(y,X\setminus U))+2\le \frac{2^k}{d_k}(d(z,X\setminus U)-d(y,X\setminus U))+2\le d(y,z)`$. There exists a short extension $`\overline{\xi }_U`$ of the map $`\xi `$ with values in $`I_U`$. We define $`p_i(x)=\overline{\xi }_U(x)`$. It is easy to see that the map $`p_i`$ is short. We show that the diagonal product $`p=\mathrm{\Delta }p_i:X\to \prod T^i`$ is a uniform embedding. We consider the $`l_1`$-metric on $`\prod T^i`$. Since each $`p_i`$ is short, the map $`p`$ is Lipschitz. Clearly, $`Dist(p(x),p(x^{}))\ge \rho (d(x,x^{}))`$ for the function $`\rho (t)=\mathrm{inf}\{Dist(p(x),p(x^{})):d(x,x^{})\ge t\}`$. Assume that $`\rho `$ is bounded from above. Then there is a sequence of pairs $`(x_k,x_k^{})`$ of points with $`d(x_k,x_k^{})>m_k`$ and with $`Dist(p(x_k),p(x_k^{}))\le b`$ for all $`k`$. For every $`k`$ there is an element $`U\in 𝒰_k^i`$ such that $`d(x_k,X\setminus U)>d_k`$. Since $`d(x_k,x_k^{})>m_k`$, it follows that $`x_k^{}\notin U`$.
Note that $`\overline{\xi }_U(x_k)=2^k`$ and hence the distance between $`p_i(x_k)`$ and $`p_i(x_k^{})`$ in the graph $`T^i`$ is greater than $`2^k`$. Therefore $`Dist(p(x_k),p(x_k^{}))\ge 2^k`$. Hence $`\rho `$ tends to infinity and $`p`$ is a uniform embedding. ∎ ###### Corollary Every metric space $`X`$ with $`asdim(X)\le n`$ is coarsely isomorphic to a space $`Y`$ of linear type. First we recall that, according to Higson, an asymptotically finite dimensional space $`Y`$ has linear type if in the above definition of $`asdim`$ there exists a constant $`C`$ such that for any $`d`$ the number $`Cd`$ is an upper bound on the size of the cover $`𝒰`$. ###### Demonstration Proof Note that a tree and a finite product of trees are of linear type. Hence every subspace of a finite product of trees is of linear type. By Theorem 3 every asymptotically finite dimensional space $`X`$ is coarsely isomorphic to a subset of a finite product of trees. ∎ ###### Lemma 3 Every locally finite tree is uniformly embeddable in a complete simply connected 2-dimensional manifold $`M`$ with curvature pinched between two negative constants: $`k_1\le K(M)\le k_2<0`$. ###### Demonstration Proof Embed the tree into a plane. For every vertex $`v`$ which is not an end point we consider the longest right-hand-rule and left-hand-rule paths from $`v`$. If one of these paths is finite, then it ends in an end point of the tree. We attach a hyperbolic half-plane $`H_v`$ by an isometry between the union of these paths and an interval in $`H_v`$. For every end-point vertex $`v`$ we get two half-planes $`H_{v_1}`$ and $`H_{v_2}`$ with corresponding intervals ending in $`v`$; we attach them by isometries of the corresponding rays in $`H_{v_i}`$. As a result we get a plane with a piecewise hyperbolic metric on it, with possible singularities only at the vertices of the tree. We can approximate this metric by a smooth metric of strictly negative curvature. ∎ ###### Theorem 4 Every metric space $`X`$ with $`asdim(X)\le n`$ is uniformly embeddable in a $`(2n+2)`$-dimensional non-positively curved manifold $`W`$ with $`asdim(W)=2n+2`$. ###### Demonstration Proof We apply Theorem 3 and Lemma 3 to obtain an embedding of $`X`$ into $`W=\prod M_i`$, where each $`M_i`$ is a 2-dimensional negatively curved manifold. Then $`W`$ is non-positively curved. By a theorem of Gromov \[G1\] the asymptotic dimension of each $`M_i`$ equals 2. Hence the asymptotic dimension of $`W`$ is $`2(n+1)`$ \[D-J\]. ∎ ###### Problem 3 Can $`2n+2`$ in the above theorem be improved to $`2n+1`$? Perhaps it is natural to ask whether every asymptotically $`n`$-dimensional metric space of bounded geometry is embeddable into $`(\text{H}^2)^{n+1}`$, the product of $`n+1`$ copies of the hyperbolic plane. ## §4 On the First Theorem of Yu The following lemma is a generalization of Lemma 2. ###### Lemma 4 Let $`M`$ be a closed $`l`$-dimensional manifold smoothly embedded in a non-positively curved $`n`$-manifold $`W`$ with a trivial tubular $`ϵ`$-neighborhood $`N_ϵ(M)`$. Then for any $`d>0`$ there is an embedding $`\gamma :N_ϵ(M)\to \text{R}^n`$ such that the diagonal embedding $`j=(1\mathrm{\Delta }\gamma ):M\to W\times \text{R}^n`$ has a regular neighborhood $`N`$ with projection $`\nu :N\to j(M)`$ such that: 1) there is a short map $`h:(N,\partial N)\to (B_d^{2n-l},\partial B_d^{2n-l})`$; 2) the projection of $`N`$ to $`W`$ is contained in $`\overline{N}_{2d}(M)`$. This Lemma together with the above Embedding Theorem allows one to prove the following: ###### Theorem 5 If a uniformly contractible manifold $`X`$ has finite asymptotic dimension, then there is $`n`$ such that $`X\times \text{R}^n`$ is integrally hyperspherical. The proof is exactly the same as that of Theorem 1.
###### Demonstration Proof of Lemma 4 Let $`TW`$ be the tangent bundle of a non-positively curved complete simply connected $`n`$-dimensional Riemannian manifold $`W`$. For every $`x\in W`$ there is the exponential map $`e_x:\text{R}^n\to W`$ which takes a vector $`v`$ to the point $`y=e_x(v)`$ on the geodesic ray in the direction of $`v`$ with $`d_W(x,y)=\|v\|`$. Note that $`e_x`$ is a homeomorphism for every $`x`$. The visual sphere at infinity together with the exponential maps defines a trivialization of $`TW`$. A tubular $`ϵ`$-neighborhood of a smooth submanifold $`M^l\subset W^n`$ is a neighborhood $`N_ϵ(M)`$ with projection $`p:N_ϵ(M)\to M`$ such that $`p^{-1}(x)=e_x(B_ϵ^{n-l})`$, where $`B_ϵ^{n-l}\subset N_x(M^l)\subset T_xW`$ is a euclidean $`ϵ`$-ball lying in the normal direction. Using the embedding $`e_{x_0}^{-1}|_{N_ϵ(M)}:N_ϵ(M)\to \text{R}^n`$ we define $`\gamma :N_ϵ(M)\to \text{R}^n`$ to be a $`\mu `$-expanding map, where $`\mu `$ is a large number defined as follows. By the definition of a tubular neighborhood there is a lift $`e^{-1}:N_ϵ(M)\to M\times B_ϵ^{n-l}\subset TW`$. Let $`q:e^{-1}(N_ϵ(M))\to B_ϵ^{n-l}`$ be a trivialization. Let $`\lambda `$ be a Lipschitz constant for the map $`qe^{-1}`$. Note that $`e^{-1}(y)=(x,e_x^{-1}(y))\in T_xW`$ for $`y\in p^{-1}(x)`$. Let $`\overline{N}_{2d}(M)`$ be the closed $`2d`$-neighborhood of $`M`$. The correspondence $`x\mapsto e_x^{-1}|_{\overline{N}_{2d}(M)}`$ defines a map $`\mathrm{\Phi }:M\to C(\overline{N}_{2d}(M),\text{R}^n)`$, where the functional space $`C(\overline{N}_{2d}(M),\text{R}^n)`$ is supplied with the sup norm $`\|f\|=\mathrm{sup}_{z\in \overline{N}_{2d}(M)}\|f(z)\|`$. Let $`\stackrel{~}{\lambda }>1`$ be a Lipschitz constant of $`\mathrm{\Phi }`$. We take $`\mu =\lambda \stackrel{~}{\lambda }d/ϵ`$. We define $`N=\bigcup _{x\in M}\overline{B}_{2d}(x)\times \gamma (\overline{B}_ϵ(x))`$. Here we use $`\overline{B}_r`$ to denote a ball of radius $`r`$ in $`W`$ and $`B_r`$ for a ball in $`\text{R}^n`$. We define short maps $`h_1:N\to B_d^n`$ and $`h_2:N\to B_d^{n-l}`$ such that the sum $`(h_1+h_2):N\to B_d^n\times B_d^{n-l}`$ satisfies condition (1). We define $`h_1|_{\overline{B}_{2d}(x)\times \{y\}}=\frac{1}{2}e_x^{-1}|_{\overline{B}_{2d}(x)}`$ for every $`x\in M`$ and every $`y\in \gamma (\overline{B}_ϵ(x))`$. Thus, $`h_1((u,\gamma (v))_x)=\frac{1}{2}e_x^{-1}(u)`$ for $`u\in \overline{B}_{2d}(x)`$ and $`v\in \overline{B}_ϵ(x)`$. We define $`h_2=\frac{d}{ϵ}qe^{-1}\gamma ^{-1}pr_2`$. Then we can estimate a Lipschitz constant for $`h_2`$ as the product $`\frac{d}{ϵ}\lambda \mu ^{-1}=\frac{1}{\stackrel{~}{\lambda }}\le 1`$. Let $`z=(u,\gamma (v))_x`$ and $`z^{}=(u^{},\gamma (v^{}))_{x^{}}`$ be two points in $`N`$. Then $`\|h_1(z)-h_1(z^{})\|=\frac{1}{2}\|e_x^{-1}(u)-e_{x^{}}^{-1}(u^{})\|\le \frac{1}{2}\|e_x^{-1}(u)-e_{x^{}}^{-1}(u)\|+\frac{1}{2}\|e_{x^{}}^{-1}(u)-e_{x^{}}^{-1}(u^{})\|\le \frac{1}{2}\stackrel{~}{\lambda }d(x,x^{})+\frac{1}{2}d(u,u^{})\le \frac{1}{2}(\mu d(x,x^{})+d(u,u^{}))\le \frac{\sqrt{2}}{2}\sqrt{d^2(u,u^{})+d^2(\gamma (v),\gamma (v^{}))}\le d(z,z^{})`$. Then we define $`h`$ as the composition of $`h_1+h_2`$ and the natural projection of $`B_d^n\times B_d^{n-l}`$ onto $`B_d^{2n-l}`$. Condition (1) holds. Condition (2) holds by the definition of $`N`$. ∎
# (Teff,log g,[Fe/H]) Classification of Low-Resolution Stellar Spectra using Artificial Neural Networks ## 1. Introduction Artificial Neural Networks (ANNs) have been applied to the classification of stellar spectra only recently, but with great success. These computational systems provide a mapping from a set of inputs to a set of desired outputs and can be trained to classify anything with great accuracy and speed. Vieira & Ponz (1995) made use of this technique to carry out the spectral classification of low-resolution spectra obtained by the IUE satellite. They found that ANNs performed better than classical methods based on a defined metric distance, making possible an accuracy of 1.1 spectral subclasses. More recently, Bailer-Jones et al. (1998) trained an ANN to classify objective-prism spectra from the Michigan Spectral Survey, extracting the spectral type (std. deviation = 1.09) and the luminosity class (success rate $`>`$ 95%). We have stepped forward from the two-dimensional (temperature and luminosity class) classification to the three-dimensional one, including the stellar metal content. We have made use of part of the observational material collected by Beers and his large collaborative projects (Beers et al. 1999). Table 1 summarizes the main characteristics of the spectra and the acquisition places. ## 2. Input data A selection of 182 stars spanning all metallicities, gravities and effective temperatures (Teffs) was selected for training. ANNs can over-learn, that is, they may get to the level of taking into account features that are particular to the stars in the training sample, rather than typical characteristics of the spectral classes, gravities, and metallicities they represent. For this reason, an independent sample must be used to check that the net is properly classifying the stars; 82 stars were used for this purpose. Figure 1 shows the distribution of the metallicity of the training and testing samples. The training and testing samples were also selected to make sure that the mapping of the Teff-logg plane was adequate, as Figure 2 demonstrates. Metallicities for the testing and training samples were compiled by Beers et al. (1999). Effective temperatures have been derived from compiled B-V colors, applying the calibrations of Alonso et al. (1996) for dwarfs and subgiants, and Alonso et al. (1999, private communication) for more evolved stars. These calibrations are based on the InfraRed Flux Method, developed by Blackwell and collaborators (e.g., Blackwell & Lynas-Gray 1994). We have taken advantage of the distance estimates made by Beers et al. (1999) (spectroscopic parallaxes) to interpolate in the evolutionary isochrones of Bergbusch & VandenBerg (1992) and derive bolometric corrections and masses. The calculated luminosities were combined with the effective temperatures to obtain the stellar radii, and then with the masses to estimate the gravities. ## 3. Spectra processing Before entering the ANN, the spectra were pre-processed using IRAF routines. They were first continuum flattened, using a 3rd order spline interpolation method. Then, the spectra were shifted to a pre-chosen template velocity with the “Fxcor” and “Dopcor” packages and rebinned to a common dispersion (0.646 Å/pix). Finally, using “Wspectext”, the spectra were converted into text format. ## 4. Applying the neural network All weights are initially random.
A node fires at a value given by a sigmoid function, $`F(a)=1/(1+e^{-a})`$, where $`a=\mathrm{\Sigma }_i(w_{ij}\times I_i)`$, $`w_{ij}`$ being the corresponding weight and $`I_i`$ the corresponding input. Then $`F(a)=O_j`$, the hidden (or output) node value. The weight training is accomplished by means of the Ripley code (Ripley 1993), a quasi-Newtonian optimization method. Besides the initial random weights, Ripley's code eliminates all free parameters that are present in most back-propagation networks (e.g. learning rate, momentum term). Artificial Neural Networks of 3, 5, 7, and sometimes 9 and 11 hidden nodes were tried with varying random weight initializations. The net architecture finally used has 1 hidden layer and 5 hidden nodes, which produced the most reasonable results based on overdetermination as well as reliability and time constraints. A typical training session involved about 1000 iterations in perhaps 30 minutes on a Sun Ultra 30. This implies a testing time of much less than 1 second per spectrum. We chose a final spectral range of 3630 to 4890 Å before running the ANN to ensure the best spectral quality possible. At 0.646 Å/pix, this yields 1952 spectral resolution elements, i.e. input values, per spectrum. ## 5. Results and conclusions Our results are displayed in Figure 3. Several stars in the training sample were indicated by the net as problematic. A close look at those outliers revealed that an important number of them corresponded to obvious errors in the application of the spectroscopic parallax technique. They have been excluded from the comparison shown here, and will be included, with the corrected stellar parameters, in future ANN training runs. Other outliers, for which no obvious explanation was found, have been kept in the comparison. The performance of the trained ANNs can be seen graphically in the figures, and is summarized in Table 2, where the rms differences between the known parameters and those provided by the net are displayed. The information for the training sample provides a glimpse of how well the ANN is learning. The few outliers mainly represent unusual spectra which are either: (a) under-represented in the training set, or (b) of poor spectral quality and/or unreasonably continuum-flattened. We expect future ANN runs to provide better results, after correcting errors that have already been identified, and others yet to be investigated, in the parameters adopted for at least some of the outliers. ### Acknowledgments. This work has been partially funded by the U.S. NSF (grant AST961814), and the Robert A. Welch Foundation of Houston, Texas. ## References Alonso, A., Arribas, S., & Martínez Roger, C. 1996, A&A, 313, 873 Bailer-Jones, C. A. L. 1997, PASP, 109, 932 (PhD thesis abstract) Bailer-Jones, C. A. L., Irwin, M., & von Hippel, T. 1998, MNRAS, 298, 361 Beers, T. C., Rossi, S., Norris, J. E., Ryan, S. G., & Shefler, T. 1999, AJ, 117, 981 Bergbusch, P. A., & Vandenberg, D. A. 1992, ApJS, 81, 163 Blackwell, D. E., & Lynas-Gray, A. E. 1994, A&A, 282, 899 Vieira, E. F., & Ponz, J. D. 1995, A&AS, 111, 393 von Hippel, T., Storrie-Lombardi, L. J., Storrie-Lombardi, M. C., & Irwin, M. J. 1994, MNRAS, 269, 97
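As an illustration of the architecture described in Sect. 4, the forward pass of such a network can be sketched in a few lines. This is a schematic reconstruction, not the actual Ripley code: the array shapes follow the text (1952 spectral bins, 5 hidden nodes, 3 outputs for Teff, log g and \[Fe/H\]), while the weight values, scalings and function names are placeholders.

```python
import numpy as np

def sigmoid(a):
    """Node activation F(a) = 1/(1 + exp(-a)), as given in Sect. 4."""
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(42)
n_in, n_hid, n_out = 1952, 5, 3           # spectral bins; hidden nodes; (Teff, logg, [Fe/H])
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))  # input -> hidden weights (initially random)
W2 = rng.normal(0.0, 0.1, (n_out, n_hid)) # hidden -> output weights

def forward(spectrum):
    """Map one continuum-flattened, rebinned spectrum to three scaled outputs."""
    hidden = sigmoid(W1 @ spectrum)       # a = sum_i w_ij * I_i, then F(a) = O_j
    return sigmoid(W2 @ hidden)           # outputs in (0, 1), to be rescaled to physical units

spectrum = rng.random(n_in)               # stand-in for one 1952-element spectrum
print(forward(spectrum))
```

In a real training run the weights would of course be optimized against the 182-star training sample rather than left at their random initial values.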
# Near-infrared detection and optical follow-up of the GRB990705 afterglow (based on observations collected at the European Southern Observatory, La Silla and Paranal, Chile, and on observations collected with NOAO facilities) ## 1 Introduction Multiwavelength observations of Gamma–Ray Burst (GRB) afterglows are of crucial importance for understanding and constraining the active emission mechanisms (Wijers et al. 1997; Galama et al. 1998, Wijers & Galama 1999, Masetti et al. 1999). Optical and near-infrared (NIR) data carry the richest and most detailed information. In particular, since the GRB counterparts might heavily suffer from dust obscuration within the host galaxy, the NIR data, less affected by this extinction, are more effective than the optical ones for the study of the counterpart itself and of the circumburst medium and, ultimately, for determining the nature of the GRB progenitors (see e.g. Dai & Lu 1999). GRB990705 (Celidonio et al. 1999) was detected by the Gamma-Ray Burst Monitor (GRBM; Frontera et al. 1997, Amati et al. 1997, Feroci et al. 1997) onboard BeppoSAX (Boella et al. 1997) on 1999 July 5.66765 UT and promptly localized with a 3′ accuracy by Unit 2 of the BeppoSAX Wide Field Cameras (WFC; Jager et al. 1997). This GRB lasted about 45 s in the GRBM 40–700 keV band, in which it reached a $`\gamma `$–ray peak flux of (3.7 $`\pm `$ 0.1)$`\times `$10<sup>-6</sup> erg cm<sup>-2</sup> s<sup>-1</sup>, and showed a complex and multi-peaked structure. The WFC (2–26 keV) data indicate that GRB990705 had a similar duration and lightcurve in the X–rays, and that it displayed very bright X–ray emission with a peak intensity of about 4 Crab. A detailed presentation and description of the prompt event will be given by Amati et al. (in preparation). A BeppoSAX X–ray follow-up of GRB990705 started 11 hours after the GRBM trigger (Gandolfi 1999; Amati et al. 1999). The detection of this GRB by Ulysses (Hurley & Feroci 1999) and NEAR (Hurley et al. 1999) determined two annuli intersecting the BeppoSAX WFC error circle. This allowed the reduction of the error box to $`\sim `$3.5 square arcmin. Radio observations carried out with ATCA (Subrahmanyan et al. 1999) detected three radio sources in the WFC error circle. However, none of them lies inside the intersection of the Ulysses, NEAR and BeppoSAX error boxes (Hurley et al. 1999). Optical and near-infrared (NIR) observations were immediately activated at telescopes in the southern hemisphere to search for a counterpart at these wavelengths. The early imaging of the 3′-radius WFC error circle at the ESO-NTT with the SOFI camera allowed us to detect a bright NIR transient (Palazzi et al. 1999) inside the intersection of the Ulysses, NEAR and BeppoSAX error boxes. In this paper we report on the discovery and follow-up observations of the NIR and Optical Transients (NIRT and OT, respectively) associated with GRB990705. In Sect. 2 we describe the data acquisition and reduction, while in Sect. 3 we report the results, which are then discussed in Sect. 4. ## 2 Observations and data reduction ### 2.1 Near-infrared data The NIR imaging started 6.6 hours after the high-energy event: $`H`$-band images were acquired on 1999 July 5.9, 6.4 and 6.9 at La Silla (Chile) with the 3.58-meter ESO-NTT plus SOFI (see the observation log in Table 1).
The camera is equipped with a Hawaii 1024$`\times `$1024 pixel HgCdTe detector, with a plate scale of 0.29″ pixel<sup>-1</sup> and a field of view of roughly 4.9′$`\times `$4.9′. Images are composed of a number of elementary coadded frames acquired by dithering the telescope by several arcsecs every 60 s. Reduction of the images was performed with IRAF and the STSDAS packages (IRAF is the Image Analysis and Reduction Facility made available to the astronomical community by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under contract with the U.S. National Science Foundation; STSDAS is distributed by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS 5–26555). Each image was reduced by first subtracting a mean sky, obtained from frames just before and after the source image. Then, a differential dome flatfield correction was applied, and the frames were registered to fractional pixels and combined. Before frames were used for sky subtraction, stars in them were eliminated by a background interpolation algorithm (imedit) combined with an automatic “star finder” (daofind). We calibrated the photometry with stars selected from the NICMOS Standard List (Persson et al. 1998). The stars were observed in five positions on the detector, and were reduced in the same way as the source observations. The formal photometric accuracy based only on the standard star observations is typically better than 3%. The source photometry was corrected for atmospheric extinction using the mean ESO $`H`$-band extinction coefficient of 0.06 (Engels et al. 1981). $`L`$-band (3.205-3.823 $`\mu `$m) observations were also carried out in Antarctica on July 7.6 with the SPIREX 0.6-meter telescope plus the NOAO ABU IR camera, with a 0.6″ pixel<sup>-1</sup> plate scale. ABU houses a 1024$`\times `$1024 pixel ALADDIN InSb array operating at 36 K using a closed-cycle helium refrigeration system. Forty images (eight 5-image cross patterns - center and the cardinal points - with 30″ separation) of 3-min duration each (12 coadded 15-sec integrations) were obtained, for a total on-source time of 120 minutes. Images were then sky-subtracted using sky frames generated from the running median of 6 neighbors, divided by the flatfield, and spatially registered using stars within each field. The 40 images were shifted and median filtered into the 120-min composite. Star HR 2015 (McGregor 1994) was employed as an $`L`$-band standard to zero-point calibrate the GRB990705 field. ### 2.2 Optical data Optical imaging of the GRB990705 error box was obtained at Paranal (Chile) with the 8.2-meter ESO VLT-UT1 (“Antu”) plus FORS1 (detector scale: 0.2″ pixel<sup>-1</sup>; field of view: 6.8′$`\times `$6.8′) on 1999 July 6.4, 8.4 and 10.4 in the $`V`$ band, and at La Silla (Chile) with the 2.2-meter MPG/ESO telescope plus WFI (8-CCD mosaic; detector scale: 0.238″ pixel<sup>-1</sup>; field of view: 34′$`\times `$33′) on 1999 July 6.4 ($`B`$ band) and July 7.4 ($`V`$ band). The complete log of the optical observations is reported in Table 1. Images were debiased and flat-fielded with the standard cleaning procedure; each set of $`V`$ frames of July 6, 7, and 10 was then co-added to increase the signal-to-noise ratio.
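Both the NIR and the optical photometry involve a simple airmass extinction correction of the form $`m_0=m_{obs}-kX`$, with $`k`$ the band extinction coefficient and $`X`$ the airmass. The sketch below applies it with the $`H`$-band coefficient quoted above; the instrumental magnitude and airmass are placeholders, not measured values:

```python
# Airmass extinction correction: m0 = m_obs - k * X, with k the band
# extinction coefficient, e.g. the mean ESO H-band value of 0.06 (Sect. 2.1).
def correct_extinction(m_obs, k, airmass):
    """Return the magnitude corrected to zero airmass."""
    return m_obs - k * airmass

k_H = 0.06                 # mean ESO H-band coefficient (Engels et al. 1981)
m_obs, X = 17.00, 1.8      # illustrative instrumental magnitude and airmass
print(f"H_0 = {correct_extinction(m_obs, k_H, X):.2f}")   # 17.00 - 0.108 = 16.89
```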
We then chose, when applicable, PSF-fitting photometry as the measurement technique for the magnitudes of point-like objects, because the field, located in the outskirts of the Large Magellanic Cloud (LMC), is quite crowded (especially in the deep images). Photometry was performed on the images using the PSF-fitting algorithm of the DAOPHOT II data analysis package (Stetson 1987) within MIDAS. In order to calibrate the images to the Johnson-Kron-Cousins photometric system, we acquired on July 7 a $`B`$ frame of part of Selected Area 95 with the 2.2-meter telescope, and on July 10 $`V`$ frames of the PG 1323$`-`$086, PG 2213$`-`$006 and Mark A sequences (Landolt 1992) with Antu; we adopted as airmass extinction coefficients 0.11 for the $`V`$ band and 0.25 for the $`B`$ band. With this photometric calibration, for comparison, the USNO-A1.0 star U0150\_02651600, with coordinates (J2000) $`\alpha `$ = 5<sup>h</sup> 09<sup>m</sup> 42$`\stackrel{s}{.}`$59, $`\delta `$ = $`-`$72° 07$`^{\prime }`$ 41$`\stackrel{\prime \prime }{.}`$2, has $`B`$ = 18.72 and $`V`$ = 17.64. Unfortunately, the $`B`$ calibration frames were obtained under poor photometric conditions (see Table 1), and therefore the uncertainty on the zero point of the calibration ($`\pm `$0.25 mag) is by far the main source of error in the measurement of the $`B`$ magnitude of the USNO star. No color term was applied in the $`V`$-band calibration since only $`V`$ frames were taken on July 10; the uncertainty on the $`V`$ zero point is thus $`\pm `$0.15 mag. These large errors are also due to the high airmass (larger than 2) affecting our observations. The $`B`$ and $`V`$ magnitude errors quoted in the next section are only statistical and do not contain any possible zero-point offset. We evaluated the Galactic hydrogen column density in the direction of GRB990705 using the NRAO maps by Dickey & Lockman (1990), from which we obtained $`N_\mathrm{H}`$ = 0.72$`\times `$10<sup>21</sup> cm<sup>-2</sup>; using the empirical relationship by Predehl & Schmitt (1995), we then computed a foreground Galactic absorption $`A_V`$ = 0.40. This, by applying the law by Rieke & Lebofsky (1985), corresponds to $`E(B-V)`$ = 0.13 and to $`E(V-H)`$ = 0.33; using the law by Cardelli et al. (1989) we then derived $`A_B`$ = 0.53 and $`A_H`$ = 0.07. The intrinsic value of $`N_\mathrm{H}`$ in that region of the LMC is less than $`\sim `$10<sup>19</sup> cm<sup>-2</sup> (McGee et al. 1983); therefore, the reddening induced by the LMC on the NIRT/OT is practically negligible.

## 3 Results

The summed 20-min NTT image of July 5.9 (Fig. 1, left panel) shows an object at a magnitude $`H`$ = 16.57 $`\pm `$ 0.05, which in the July 6.4 8-min image is detected at $`H`$ = 18.38 $`\pm `$ 0.05. On July 6.9 the object magnitude is $`H`$ $`>`$ 19.9 at the 3$`\sigma `$ level (Fig. 1, right panel). Astrometry performed on the first NTT observation using several stars from the USNO-A1.0 catalogue gives for this fading source the coordinates $`\alpha `$ = 5<sup>h</sup> 09<sup>m</sup> 54$`\stackrel{s}{.}`$52, $`\delta `$ = $`-`$72° 07$`^{\prime }`$ 53$`\stackrel{\prime \prime }{.}`$1 (J2000), with a 1-$`\sigma `$ accuracy of 0$`\stackrel{\prime \prime }{.}`$3. This object is inside the intersection of all the above-mentioned X–ray error boxes, and almost at the center of the BeppoSAX WFC error circle. Moreover, the observed brightness variation and the variability timescale are similar to those of previously observed optical afterglows.
This leads us to conclude that it is the NIR afterglow of GRB990705 (we can exclude an LMC microlensing event, since these phenomena show a completely different behaviour: see Sackett 1999 and references therein). No object is detected in the $`L`$-band July 7.6 composite image: we measure an upper limit $`L`$ $`>`$ 13.9 (3$`\sigma `$ significance, within a 3-pixel radius aperture) at the location corresponding to the $`H`$-band detection. Assuming a temporal power-law decay $`F\propto t^{-\alpha }`$ between the two $`H`$-band detections, we find $`\alpha `$ = 1.68 $`\pm `$ 0.10. Including the $`H`$-band upper limit and fitting a single power law, the decay exponent is $`\alpha `$ = 1.84 $`\pm `$ 0.05, but the fit is not acceptable ($`\chi _\nu ^2`$ = 16.8). In the following we will thus consider $`\alpha `$ = 1.7 as the decay index of the early part of the afterglow, while $`\sim `$1 day after the GRB the transient has probably started a faster decay with a power-law index $`\alpha ^{\prime }`$ $`>`$ 2.6, based on the second $`H`$-band detection and the $`H`$-band upper limit. We note however that the paucity of the data makes it difficult to precisely locate the epoch at which the decay slope changed. Antu $`V`$-band observations, albeit of lower significance due to the faintness of the object and to poorer weather conditions (see Table 1), are consistent with the NIR decay: indeed, a fading optical object at a position consistent with that of the NIRT is detected. On July 6.4 this object was at $`V`$ = 22.0 $`\pm `$ 0.2, while two days later it was below the limiting magnitude of the frame ($`V`$ = 23.0, 3$`\sigma `$ level). These values indicate a power-law decay with an index $`\alpha >`$ 1.0, consistent with the NIR observations. The object is not seen in the 2.2-meter $`B`$ frame of July 6 down to a limiting magnitude of $`B`$ = 21.9, nor in the $`V`$ frame acquired with the same telescope on July 7 down to $`V`$ = 22.3 (both values at 3$`\sigma `$ significance). The NIR and optical photometry is reported in Fig. 2, where the $`H`$-band early decay is also modeled as a power law. Inspection of the Antu summed 30-min image of July 10 (see Fig. 3, left panel) shows an irregular extended object at the location of the NIRT and of the OT. We estimate that the magnitude of this object ($`\sim `$ 2$`\stackrel{\prime \prime }{.}`$4$`\times `$0$`\stackrel{\prime \prime }{.}`$8 in size) is $`V`$ = 23.8 $`\pm `$ 0.2. Due to the poor resolution we cannot exclude that this feature (or a portion of it) is due to the contribution of many unresolved sources in the LMC and/or faint background sources. Our photometric analysis of the field, however, reveals only one possible point-like object, at $`V`$ = 23.99 $`\pm `$ 0.07, in this area southward of the NIRT. After comparison of the positions of field stars in the Antu and NTT images, the position of the point-like object is not fully consistent with that of the NIRT, being $`\sim `$1$`\stackrel{\prime \prime }{.}`$2 (i.e. more than 4$`\sigma `$) away from it (in the first Antu image, where the OT is detected, the extremely bad seeing hampers a significant positional comparison). Assuming that the point-like source is the transient, it is hard to explain this offset as the result of a possible contribution of the extended source in the first $`H`$-band observation: if this effect were present, the centroid of the NIRT+galaxy blend would be expected to be closer to the center of the nebulosity than what is observed in the first NTT observation.
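As a quick numerical cross-check of the decay indices used in this discussion (the early $`H`$-band slope derived above and the $`V`$-band slope invoked just below), the power-law exponents can be recomputed from the quoted magnitudes. This is a minimal sketch added here for illustration; it uses the rounded epochs given in the text (first $`H`$ point 6.6 hours after the trigger, second on July 6.416), so the outputs are only approximate:

```python
import numpy as np

def decay_index(m1, m2, t1, t2):
    # For F ~ t**(-alpha): m2 - m1 = 2.5 * alpha * log10(t2 / t1)
    return (m2 - m1) / (2.5 * np.log10(t2 / t1))

t0 = 5.66765  # GRBM trigger epoch (1999 July, UT days)

# H band: 16.57 about 6.6 hr after the trigger, 18.38 on July 6.416
alpha_H = decay_index(16.57, 18.38, 6.6 / 24.0, 6.416 - t0)

# V band, under the hypothesis that the July 10 point-like source is the OT
alpha_V = decay_index(22.0, 23.99, 6.4 - t0, 10.4 - t0)

print(round(alpha_H, 2), round(alpha_V, 2))  # ~1.67 and ~1.0, cf. 1.68 +/- 0.10 and ~0.9
```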
Moreover, the $`V`$-band temporal decay would then be much slower than that in the NIR ($`\alpha _V`$ $`\sim `$ 0.9). Therefore we suggest that the point-like object seen on July 10 is unrelated to the GRB, and might rather be a structure of the host galaxy, or possibly a foreground star. The transient, as observed about 0.8 days after the GRB trigger, is fairly red. In order to evaluate its color index between the $`V`$ and $`H`$ bands, we assumed in the $`V`$ band a temporal decay similar to that observed in the $`H`$ band and computed the $`V`$ magnitude at the epoch of the second $`H`$-band observation (July 6.416). Then, we subtracted from this $`V`$ magnitude the contribution of the nebulosity and of the neighboring point-like object (which is blended with the OT in the July 6.4 image), and from the $`H`$ magnitude of July 6.4 the upper limit of the third $`H`$-band observation ($`H`$ $`>`$ 19.9). Finally, we corrected for the Galactic extinction in the direction of GRB990705. We find $`(V-H)_{\mathrm{OT}}`$ = 3.5 $`\pm `$ 0.2 on July 6.416; if instead the host galaxy contribution is much fainter than the third-epoch upper limit, and therefore negligible in the second $`H`$-band measurement, the color index is $`(V-H)_{\mathrm{OT}}`$ = 3.8 $`\pm `$ 0.2. Both values imply a spectral slope $`\beta `$ $`\simeq `$ 2, assuming a spectral energy distribution $`F_\nu \propto \nu ^{-\beta }`$. This spectral slope would result in a magnitude $`L`$ $`\simeq `$ 15.4 at that epoch, i.e. $`\sim `$1.5 magnitudes fainter than, and therefore fully consistent with, the upper limit from the SPIREX observation taken more than one day later.

## 4 Discussion: a “red-heat” GRB afterglow

The afterglow of GRB990705 is an unprecedented case of a GRB counterpart first clearly detected in the NIR band. The detection of a possible underlying galaxy might support the extragalactic nature of this GRB, although we cannot completely rule out an association with the LMC. If the extended emission detected is the host galaxy of GRB990705, it seems to have a rather knotty and irregular shape, since no regular pattern of increasing surface brightness is observed in this structure (see Fig. 3, right panel). The present data therefore suggest that the host of GRB990705 is an irregular (possibly starburst) galaxy, as was proposed in other cases of GRB hosts (see e.g. Sahu et al. 1997 for the host of GRB970228, and Bloom et al. 1999 and Fruchter et al. 1999b for the host of GRB990123). With $`V`$ $`\simeq `$ 23.1, and assuming it is an irregular starburst galaxy with a flat optical spectrum (Fruchter et al. 1999b), we obtain for this object $`R`$ $`\simeq `$ 22.8. Using the cumulative surface density distribution of galaxies in the $`R`$ band by Hogg et al. (1997), the probability $`P_c`$ of a chance coincidence between the NIR/optical afterglow of GRB990705 and the detected galaxy can be evaluated. There are 2.5$`\times `$10<sup>4</sup> galaxies per square degree with $`R`$ $`\le `$ 22.8; with this value, the probability of finding by chance a galaxy within 3$`\sigma `$ of the position of the NIRT is $`P_c`$ $`\simeq `$ 0.006. This low probability suggests the identification of this object with the host galaxy, although it is not completely conclusive. This putative galaxy has an integrated (point-like source plus extended object) unabsorbed magnitude $`V_0`$ = 22.74 $`\pm `$ 0.15, an extension of $`\sim `$2 square arcsec and an irregular shape (see Fig. 3); therefore, it might be one of the brightest and most extended among the host galaxies of GRBs with known redshifts (GRB970228: Sahu et al. 1997, Fruchter et al.
1999a, Djorgovski et al. 1999; GRB970508: Bloom et al. 1998a, Fruchter et al. 1999c; GRB971214: Kulkarni et al. 1998, Odewahn et al. 1998; GRB980703: Bloom et al. 1998b, Vreeswijk et al. 1999; and GRB990123: Kulkarni et al. 1999, Bloom et al. 1999, Fruchter et al. 1999b). This might suggest that this object is nearer than the hosts of the other GRBs. The decay slope of this afterglow, $`\alpha `$ = 1.7, is rather steep, although not as steep as observed for GRB980326 (Groot et al. 1998) and GRB980519 (Djorgovski et al. 1998, Halpern et al. 1999). From the $`H`$-band light curve decay index we estimate an electron power-law distribution index $`p`$ $`\simeq `$ 3 (Sari et al. 1999). As already outlined in Sect. 3, the $`H`$ magnitude of the NIRT on July 6.9 is significantly below the extrapolation of the early decay (see Fig. 2). This strongly suggests a break in the NIRT $`H`$-band light curve at $`\sim `$1 day after the GRB and a subsequent steepening, similar to those exhibited by the afterglows of GRB990123 (e.g. Castro-Tirado et al. 1999) and of GRB990510 (e.g. Stanek et al. 1999). The break cannot be accounted for by the electron cooling frequency $`\nu _\mathrm{c}`$ moving through the $`H`$ band, since the expected slope change ($`\mathrm{\Delta }\alpha `$ $`\simeq `$ 0.25; Sari et al. 1998) would be much smaller than observed ($`\mathrm{\Delta }\alpha `$ $`\sim `$ 1). A spherical scenario in which an extremely dense surrounding medium decelerates the expanding blastwave could also produce a steepening of the light curve, as envisaged by Dai & Lu (1999). On the other hand, the steepness of the lightcurve decay might suggest beamed emission (Sari et al. 1999). Assuming that a break due to jet spreading occurred in the $`H`$-band light curve of the GRB990705 NIRT about one day after the GRB, the slope $`\alpha ^{\prime }`$ $`>`$ 2.6 would be roughly consistent with the expected value ($`\alpha ^{\prime }`$ = $`p`$ $`\simeq `$ 3; Sari et al. 1999). If we place the break at the epoch of the second $`H`$-band measurement, i.e. $`\sim `$18 hours after the GRB trigger, then, assuming a total isotropically-emitted energy of the ejecta of 10<sup>52</sup> erg and a local interstellar medium density of 1 cm<sup>-3</sup>, we obtain that the angular width of the jet is $`\theta _0`$ $`\simeq `$ 0.1, consistent with the expected opening angles of GRB jets (Sari et al. 1999, Postnov et al. 1999). This afterglow is also one of the reddest observed so far, with an optical/NIR color on July 6.4 similar to that of the OT of GRB980329 (Palazzi et al. 1998). We note that the NIR/optical spectral slope $`\beta `$ of the transient on July 6.4 and the measured index of the temporal decay $`\alpha `$ would be inconsistent both with the spherical expansion of a relativistic blast wave (assumed as a valid approximation of an initially strongly beamed jet) and with a beamed expansion (Sari et al. 1999). Under the hypothesis that the optical/NIR spectrum is considerably reddened by absorption within the host galaxy, we corrected it using the extinction law of a typical starburst at various redshifts (Calzetti 1997). This approach, which we have adopted with encouraging results also for other GRBs (Palazzi et al. 1998, Dal Fiume et al. 2000), can be justified under the assumption that the heavy obscuration of this afterglow is due to its location in a high-density, and probably star-forming, region. We find consistency with the expectations of the model by Sari et al. (1999) either for a redshift $`z`$ $`\simeq `$ 2 or for a redshift $`z`$ $`\simeq `$ 0.1.
In both cases $`\nu _\mathrm{c}`$ must be above the NIR frequencies, which is a reasonable finding given that the optical/NIR spectrum is measured at an early epoch after the GRB. Since the host galaxy is bright and rather large in angular size, we tend to favor the latter redshift estimate. This result has to be taken with caution, being based on a series of assumptions and on a single color index, and therefore affected by a large uncertainty. However, if it were correct, the emitted $`\gamma `$-ray output, for a fluence of 7.5$`\times `$10<sup>-5</sup> erg cm<sup>-2</sup> (Amati et al., in preparation), would be 1.7$`\times `$10<sup>51</sup> erg (assuming a standard Friedmann model cosmology with $`H_0`$ = 70 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0`$ = 0.15), in the range of $`\gamma `$-ray energies typically measured for GRBs. ###### Acknowledgements. M. Méndez is a fellow of the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina. This research was supported in part by the National Science Foundation under a cooperative agreement with the Center for Astrophysical Research in Antarctica (CARA), grant number NSF OPP 89-20223. CARA is a National Science Foundation Science and Technology Center. Principal collaborators in the SPIREX/ABU project include the University of Chicago, the National Optical Astronomy Observatories, the Rochester Institute of Technology, and the University of New South Wales. We owe special thanks for the SPIREX/ABU data to Ian Gatley, Nigel Sharp, Al Fowler and Harvey Rhody. We also thank the referee J. Lub for his comments.
# From spatially indirect to momentum-space indirect exciton by in-plane magnetic field

## Abstract

In-plane magnetic field is found to change drastically the photoluminescence spectra and kinetics of interwell excitons in GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As coupled quantum wells. The effect is due to the in-plane magnetic field induced displacement of the interwell exciton dispersion in momentum space, which results in the transition from the momentum-space direct exciton ground state to the momentum-space indirect exciton ground state. In-plane magnetic field is, therefore, an effective tool for exciton dispersion engineering.

The effective exciton temperature in a quasiequilibrium system of excitons in semiconductors is determined by the ratio between the exciton energy relaxation rate and the exciton recombination rate. A low exciton temperature is crucial for the observation of a number of novel collective phenomena caused by the high occupation of the lowest-energy exciton states in a quasi-two-dimensional (2D) exciton system. Low temperatures can be achieved in a system with a low exciton recombination rate. A long exciton lifetime is characteristic (1) of systems where the ground-state exciton is optically inactive (in dipole approximation) because of parity, e.g. Cu<sub>2</sub>O; (2) of systems where electron and hole are spatially separated, e.g. spatially indirect (interwell) excitons in direct-band-gap coupled quantum wells (CQWs) like $`\mathrm{\Gamma }`$–$`X_z`$ AlAs/GaAs CQWs and GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As CQWs; (3) of systems where the ground-state exciton is indirect in momentum space, e.g. $`\mathrm{\Gamma }`$–$`X_{xy}`$ AlAs/GaAs CQWs. Due to the coupling between the internal structure of a magnetoexciton and its center-of-mass motion, the ground exciton state in a direct-band-gap semiconductor in crossed electric and magnetic fields was predicted to be at finite momentum. In particular, the transition from the momentum-space direct exciton ground state to the momentum-space indirect exciton ground state was predicted (1) for the interwell exciton in coupled quantum wells in in-plane magnetic field, and (2) for the single-layer exciton in in-plane electric field and perpendicular magnetic field. These effects should allow for a controllable variation of the exciton dispersion and an increase of the exciton ground-state lifetime. In the present paper we report on the experimental observation of the in-plane magnetic field induced transition from the momentum-space direct exciton ground state to the momentum-space indirect exciton ground state for the system of interwell excitons in GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As CQWs. The transition is identified by the drastic change of the exciton photoluminescence (PL) kinetics. The interwell exciton dispersion in in-plane magnetic field is analyzed theoretically and determined experimentally from the shift of the interwell exciton PL energy; the experimental data are found to be in qualitative agreement with the theory. We suppose that the in-plane magnetic field is directed along the $`x`$-axis, and use the gauge $`A_x=A_z=0,A_y=Bz`$. Due to the invariance corresponding to the simultaneous magnetic translation of electron and hole by the same vector parallel to the CQW plane, i.e. the invariance under the translation and the corresponding gauge transformation, the two-dimensional magnetic momentum is conserved.
For the gauge used,
$$P_x=p_{ex}+p_{hx};\qquad P_y=p_{ey}+p_{hy}-\frac{e}{c}B(z_e-z_h).$$ (1)
For $`z_{e,h}^{\prime }`$ measured from the centers of the corresponding QWs one obtains $`P_y=p_{ey}+p_{hy}-\frac{e}{c}Bd-\frac{e}{c}B(z_e^{\prime }-z_h^{\prime })`$, where $`d`$ is the mean separation between the electron and hole layers. The value $`p_B=\frac{e}{c}Bd=\mathrm{\hbar }d/l_B^2`$ is the shift of the magnetic momentum of the interwell exciton in the ground state (as follows from the analysis of the Schrödinger equation); $`l_B=(\frac{eB}{\mathrm{\hbar }c})^{-1/2}`$ is the magnetic length. The physical sense of this shift can be obtained from the analysis of an adiabatic turning-on of the in-plane magnetic field. The appearance of the vortex electric field leads to an acceleration of the interwell exciton. The final in-plane momentum is equal precisely to $`\frac{e}{c}Bd`$ and is directed normal to the magnetic field. Therefore, the appearance of the momentum $`\frac{e}{c}Bd`$ is related to the diamagnetic response of the electron-hole (exciton) system in a CQW in in-plane magnetic field. The contribution of second order in the magnetic field consists of two parts: the first, depending on the momenta, is the sum of the Van Vleck paramagnetism of the isolated QWs, and the second is the sum of the diamagnetic shifts of the isolated QWs. The Van Vleck paramagnetism leads to a renormalization of the effective (magnetic) mass of the exciton along the $`y`$-axis, $`M_{yy}`$. Therefore, the magnetoexciton dispersion law becomes anisotropic:
$$M_{xx}=M=m_e+m_h;\qquad M_{yy}=M+\delta M(B),$$ (2)
where
$$\delta M(B)=-\frac{e^2B^2}{c^2}(f_e+f_h),$$ (3)
$$f_{e,h}=\underset{n}{\sum }\frac{\left|\langle 0|z_{e,h}|n\rangle \right|^2}{E_0-E_n}.$$ (4)
Here $`E_n,|n\rangle `$ are the energies and state vectors corresponding to the size quantization of the QWs; $`f_{e,h}`$ are related to the electric polarization of the QWs in the $`z`$-direction. Since $`E_0<E_n`$, one has $`f_{e,h}<0`$ and hence $`\delta M(B)>0`$. The estimate yields:
$$\frac{\delta M_{yy}}{M}\sim \frac{E_{diam}}{E}\sim \left(\frac{L_z}{l_B}\right)^4,$$ (5)
where $`E_{diam}`$ is the diamagnetic shift in an isolated QW, $`E`$ is the size-quantization excitation energy of the QW, and $`L_z`$ is the QW width. The electric-field-tunable $`n^+in^+`$ GaAs/AlGaAs CQW structure was grown by MBE. The $`i`$-region consists of two 8 nm GaAs QWs separated by a 4 nm Al<sub>0.33</sub>Ga<sub>0.67</sub>As barrier and surrounded by two 200 nm Al<sub>0.33</sub>Ga<sub>0.67</sub>As barrier layers. The $`n^+`$-layers are Si-doped GaAs with $`N_{\mathrm{Si}}=5\times 10^{17}`$ cm<sup>-3</sup>. The electric field in the $`z`$-direction is controlled by the external gate voltage $`V_g`$ applied between the $`n^+`$-layers. Carriers were photoexcited by a pulsed semiconductor laser ($`\mathrm{\hbar }\omega =1.85`$ eV, the pulse duration was about 50 ns, the edge sharpness including the system resolution was $`0.5`$ ns, the repetition frequency was 1 MHz, $`W_{ex}=10`$ W/cm<sup>2</sup>). In order to minimize the effect of mesa heating, we worked with a mesa area of $`0.2\times 0.2`$ mm<sup>2</sup>, which was much smaller than the sample area of about 4 mm<sup>2</sup>. In addition, the bottom of the sample was soldered to a metal plate. The PL measurements were performed in a He cryostat by means of an optical fiber of diameter 0.1 mm positioned 0.3 mm above the mesa. The PL spectra and kinetics were measured by a time-correlated photon counting system.
The separation of electrons and holes in different QWs (the indirect regime) is realized by applying a finite gate voltage which fixes the external electric field in the $`z`$-direction, $`F=V_g/d_0`$, where $`d_0`$ is the $`i`$-layer width. The spatially direct (intrawell) and indirect (interwell) transitions are identified by their PL kinetics and gate voltage dependence: the intrawell PL line has a short PL decay time and its position practically does not depend on $`V_g`$, while the interwell PL line has a long PL decay time and shifts to lower energies with increasing $`V_g`$ (the shift magnitude is given by $`eFd`$). The upper and the lower direct transitions are related to the intrawell 1s heavy-hole exciton $`X`$ and the intrawell charged complexes $`X^+`$ and $`X^{-}`$. With increasing in-plane magnetic field the energy of the interwell exciton increases, while the energies of the direct transitions are almost unaffected (Fig. 1a). This behaviour is consistent with a displacement of the interwell exciton dispersion in in-plane magnetic field. The scheme of the interwell exciton dispersion at zero and finite in-plane magnetic field is shown in Fig. 2. For delocalized 2D excitons only the states with small center-of-mass momenta $`kk_0=E_g/\mathrm{\hbar }c`$ (where $`c`$ is the speed of light in the medium), i.e. within the radiative zone, can decay radiatively (see Fig. 2). For GaAs structures $`k_0\sim 3\times 10^5`$ cm<sup>-1</sup>, which is much smaller than $`k_B=p_B/\mathrm{\hbar }`$ in strong fields (at $`B=10`$ T, $`k_B\sim 2\times 10^6`$ cm<sup>-1</sup>). The energy of the interwell exciton PL is set by the energy of the radiative zone. Therefore, as the diamagnetic shift of the bottom of the subbands is small and can be neglected to first approximation, the interwell exciton PL energy in in-plane magnetic field should be increased by $`E_{p=0}=p_B^2/2M=e^2d^2B^2/2Mc^2`$. In particular, the in-plane magnetic field dependence of the interwell PL energy can be used for a measurement of the exciton dispersion, because it allows one to access high-momentum exciton states. At small fields the measured PL energy shift rate is 0.062 meV/T<sup>2</sup> (see inset to Fig. 1a), which corresponds to $`M=0.21m_0`$. This value is close to the calculated mass of the heavy-hole exciton in GaAs QWs, $`0.25m_0`$ ($`m_e=0.067m_0`$, and the calculated in-plane heavy-hole mass near $`k=0`$ is reported to be $`m_h=0.18m_0`$, see Ref. and references therein). Figure 1 shows a considerable deviation from the quadratic dependence of the interwell exciton PL energy at high $`B`$. This deviation is the consequence of the interwell exciton mass renormalization due to the in-plane magnetic field, which is predicted by the theory, see Eqs. 2-4. According to the estimate of Eq. 5, the deviation of the interwell exciton dispersion from the quadratic one should become essential in the magnetic field range where $`l_B\sim L_z`$. This is in good qualitative agreement with the experiment, which shows the onset of the deviation at $`B\sim 8`$ T, where $`l_B=9`$ nm $`\sim `$ $`L_z=8`$ nm (see inset to Fig. 1a). A possible contribution from the hole dispersion nonparabolicity to the observed increase of the exciton mass is, apparently, a minor effect for the small exciton energies considered, which are much smaller than the light–heavy-hole splitting, equal to 17 meV for the CQW studied.
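As a numerical illustration of the mass determination just described, the relation $`E_{p=0}=(eBd)^2/2M`$ can be inverted for $`M`$ using the measured shift rate of 0.062 meV/T<sup>2</sup>. The sketch below assumes a center-to-center electron-hole separation $`d`$ of about 12 nm (two 8 nm wells separated by a 4 nm barrier); this value of $`d`$ is an input assumption of the illustration, not a fitted quantity:

```python
e   = 1.602176634e-19    # elementary charge, C
m0  = 9.1093837015e-31   # free-electron mass, kg
meV = 1e-3 * e           # 1 meV in joules

d     = 12e-9            # assumed e-h layer separation, m (8 nm well + 4 nm barrier)
slope = 0.062 * meV      # measured PL shift rate in J/T^2, i.e. E(B) = slope * B^2

# E_{p=0} = p_B^2 / (2M) with p_B = e*B*d in SI units  =>  M = (e*d)^2 / (2*slope)
M = (e * d) ** 2 / (2 * slope)
print(M / m0)            # ~0.20-0.21, close to the exciton mass quoted above
```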
Note that the development of an indirect gap when an in-plane magnetic field is applied has been observed in an asymmetric modulation-doped single quantum well, where the centers of the electron and hole envelopes do not coincide; due to the free-carrier character of the recombination in the studied 2D electron gas, the PL energy shift corresponded to the electron mass. The in-plane magnetic field modifies qualitatively the exciton PL kinetics (Fig. 1b). The main feature of the interwell exciton PL kinetics at zero magnetic field is a sharp enhancement of the PL intensity after the excitation is switched off - the PL-jump (Fig. 1). The basis of the effect is the following. The exciton PL kinetics is determined by the kinetics of the occupation of the radiative zone (marked bold in Fig. 2). The occupation varies due to exciton recombination and energy relaxation. The PL-jump denotes a sharp increase of the occupation of the optically active exciton states just after the excitation is switched off. It is induced by the sharp reduction of the effective exciton temperature, $`T_{eff}`$, due to the fast decay of the nonequilibrium phonon occupation and the energy relaxation of hot photoexcited excitons, electrons, and holes. The disappearance of the PL-jump at high in-plane magnetic fields is consistent with the displacement of the interwell exciton dispersion in parallel magnetic field: for a momentum-space indirect exciton, a sharp reduction of $`T_{eff}`$ just after the excitation is switched off should not result in an increase of the occupation of the radiative zone (see Fig. 2c) and, hence, of the PL intensity. The measured radiative decay rate is proportional to the fraction of excitons in the radiative zone. The observed strong reduction of the radiative decay rate in in-plane magnetic field (by more than 20 times at $`B=12`$ T, see Fig. 1b) also reflects the displacement of the interwell exciton dispersion and, correspondingly, the nonradiative character of the ground exciton state in parallel magnetic field. In high in-plane magnetic field the radiative decay rate becomes comparable to and smaller than the nonradiative decay rate, which results in the observed quenching of the interwell exciton PL intensity (Fig. 1a). Unambiguous evidence for the nonradiative character of the ground state of the interwell exciton in parallel magnetic field comes from the temperature dependence of the PL kinetics (Fig. 3). At zero field the exciton recombination rate monotonically decreases with increasing temperature (Fig. 3b) due to the thermal reduction of the radiative zone occupation. In high in-plane magnetic field the temperature dependence is opposite: the exciton recombination rate *enhances* with increasing temperature (Fig. 3a) due to the increasing occupation of the radiative zone. Correspondingly, with increasing temperature the interwell exciton PL intensity decreases at zero field and increases in high in-plane magnetic field (see insets to Fig. 3). In conclusion, we have observed a drastic change of the photoluminescence spectra and kinetics of interwell excitons in GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As CQWs in in-plane magnetic field. The effect is due to the in-plane magnetic field induced displacement of the indirect exciton dispersion in momentum space, which results in the transition from the momentum-space direct exciton ground state to the momentum-space indirect exciton ground state.
In-plane magnetic field is, therefore, an effective tool for exciton dispersion engineering and for a controllable increase of the exciton ground-state lifetime. We speculate that it can be used for the experimental realization of an ultra-low-temperature exciton gas, which might result in an observation of the predicted collective phenomena caused by the high occupation of the lowest-energy exciton states. In addition, the renormalization of the exciton mass due to in-plane magnetic field was observed; the experimental data are in qualitative agreement with the theory. We thank A. Imamoglu for discussions. We became aware of studies of the cw PL of interwell excitons in in-plane magnetic field. We are grateful to the authors of Ref. for providing us with their unpublished data and for discussions. We acknowledge support from INTAS, the Russian Foundation for Basic Research, and the Programme ”Physics of Solid State Nanostructures” of the Russian Ministry of Sciences.
# BFKL Dynamics at Hadron Colliders

## 1 Introduction

There has been considerable interest in recent years in QCD scattering processes in the so-called ‘high-energy limit’, i.e. processes in which $`s\gg |t|\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$. The corresponding cross sections are controlled by BFKL dynamics, in which large $`\mathrm{ln}(s/|t|)`$ logarithms arising from soft and virtual gluon emissions are resummed to all orders in perturbation theory. In the leading logarithm approximation, the energy dependence of the cross section is controlled by the (hard) BFKL pomeron: $`\sigma \sim s^\lambda `$ with $`\lambda =\alpha _s\,12\mathrm{ln}2/\pi `$. The paradigm BFKL process is deep inelastic scattering at small Bjorken $`x`$, for which $`t\simeq -Q^2`$, $`s\simeq Q^2/x`$. Resummation of the leading $`\alpha _s\mathrm{ln}(1/x)`$ logarithms leads to the characteristic $`F_2\sim x^{-\lambda }`$ behaviour for the structure function as $`x\to 0`$. However it has proved difficult in practice to disentangle BFKL and ordinary DGLAP physics at currently accessible $`x`$ and $`Q^2`$ values. One is then led to consider whether hadron colliders such as the Tevatron and LHC can offer a more definitive test of BFKL small-$`x`$ dynamics. It was first pointed out by Mueller and Navelet that production of jet pairs with modest transverse momentum $`p_T`$ and large rapidity separation $`\mathrm{\Delta }y`$ at hadron colliders would be a particularly clean environment in which to study BFKL dynamics. At asymptotic separations the subprocess cross section is predicted to increase as $`\widehat{\sigma }_{jj}\sim \mathrm{exp}(\lambda \mathrm{\Delta }y)`$. To understand the special features of BFKL dynamics, it will be essential not only to study such fully inclusive cross sections, but also to investigate the structure of the associated final states. For the large-$`\mathrm{\Delta }y`$ dijet cross section, for example, one expects an increasingly large number of ‘mini-jets’, with transverse momenta of order $`p_T`$, produced in the central region. More generally, one can use BFKL dynamics to predict the expected number of such mini-jets in any small-$`x`$ hard scattering process at hadron colliders. In this note we discuss two tests of BFKL dynamics at hadron colliders: the inclusive dijet cross section and the associated multiplicity of mini-jets in Higgs production.

## 2 Dijet cross sections at large rapidity separation

We wish to describe events in hadron collisions containing two jets with relatively small transverse momenta $`p_{T1},p_{T2}`$ and large rapidity separation $`\mathrm{\Delta }y\equiv y_1-y_2`$.
Defining $`\mathrm{\Delta }\varphi \equiv |\varphi _1-\varphi _2|-\pi `$ to be the relative azimuthal angle between the jets, the leading-logarithm BFKL prediction for the ($`gg`$) subprocess cross section integrated over $`p_{T1},p_{T2}>p_T`$ is
$$\frac{d\widehat{\sigma }_{gg}}{d\mathrm{\Delta }\varphi }|_{p_{T1}^2,p_{T2}^2>p_T^2}=\frac{9\alpha _s^2\pi }{2p_T^2}\frac{1}{2\pi }\sum _{n=-\infty }^{+\infty }\mathrm{exp}(in\mathrm{\Delta }\varphi )\,C_n(t),$$ (1)
with $`t=3\alpha _s\mathrm{\Delta }y/\pi `$ and
$$C_n(t)=\frac{1}{2\pi }\int _{-\infty }^{+\infty }\frac{dz}{z^2+\frac{1}{4}}\mathrm{exp}\left(2t\chi _n(z)\right),\qquad \chi _n(z)=\mathrm{Re}\left[\psi (1)-\psi \left(\frac{1}{2}(1+|n|)+iz\right)\right],$$ (2)
where $`\psi `$ is the digamma function. The total subprocess cross section, and its corresponding asymptotic behaviour, is then
$$\widehat{\sigma }_{gg}=\frac{9\alpha _s^2\pi }{2p_T^2}C_0(t),\qquad C_0(t)\left\{\begin{array}{ll}=1&\text{for }t=0\\ \simeq \left[\frac{1}{2}\pi \,7\zeta (3)\,t\right]^{-1/2}\mathrm{exp}(4\mathrm{ln}2\,t)&\text{for }t\to \infty \end{array}\right.$$ (3)
from which we see the characteristic BFKL prediction of an exponential increase in the cross section with large $`\mathrm{\Delta }y`$. It can also be seen from (1) that the average cosine of the azimuthal angle difference $`\mathrm{\Delta }\varphi `$ defined above is proportional to $`C_1(t)`$. In fact we have
$$\langle \mathrm{cos}\mathrm{\Delta }\varphi \rangle =\frac{C_1(t)}{C_0(t)}$$ (4)
and as we shall see below, this falls off with increasing $`t`$. Unfortunately the increase of $`\widehat{\sigma }`$ with $`\mathrm{\Delta }y`$ disappears when the subprocess cross section is convoluted with parton distribution functions (pdfs), which decrease with $`\mathrm{\Delta }y`$ more rapidly than $`\widehat{\sigma }`$ increases. This is illustrated in fig. 1. The subprocess cross section rise at large $`\mathrm{\Delta }y`$ becomes a shoulder in the hadron-level cross section, whose exact shape depends on the (large-$`x`$) form of the pdfs. To avoid this pdf sensitivity, one can study the decorrelation in $`\mathrm{\Delta }\varphi `$ that arises from emission of gluons between the jets; BFKL predicts (see eq. (4)) a stronger decorrelation than does fixed-order QCD, and this prediction should be relatively insensitive to the pdfs. In practice it is not useful to compare analytic asymptotic BFKL predictions directly with experiment because nonleading corrections can be large.
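The exact expressions (2) and (4), as opposed to their asymptotic forms, are nevertheless easy to evaluate numerically. The following minimal sketch (our illustration, using the mpmath library) computes $`C_0(t)`$, $`C_1(t)`$ and the correlation $`\langle \mathrm{cos}\mathrm{\Delta }\varphi \rangle `$:

```python
import mpmath as mp

def chi(n, z):
    # chi_n(z) = Re[ psi(1) - psi((1 + |n|)/2 + i z) ], eq. (2)
    return mp.re(mp.digamma(1) - mp.digamma((1 + abs(n)) / 2 + mp.j * z))

def C(n, t):
    # C_n(t) = (1/2pi) * Integral dz exp(2 t chi_n(z)) / (z^2 + 1/4), eq. (2)
    integrand = lambda z: mp.exp(2 * t * chi(n, z)) / (z**2 + mp.mpf(1) / 4)
    return mp.quad(integrand, [-mp.inf, 0, mp.inf]) / (2 * mp.pi)

for t in [0, 0.5, 1.0, 2.0]:
    print(t, mp.nstr(C(0, t), 6), mp.nstr(C(1, t) / C(0, t), 6))

# At t = 0 both C_0(t) and <cos(Delta phi)> equal 1 (back-to-back jets);
# the azimuthal correlation then falls off with t, i.e. with Delta y.
```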
In particular, in the analytic BFKL calculation that leads to (1), (2) above, gluons can be emitted arbitrarily, with no kinematic cost, and energy and momentum are not conserved. In Ref. (see also ) a Monte Carlo approach is used in which the emitted gluons are subject to kinematic constraints (i.e. overall energy and momentum are conserved), and other nonleading effects such as the running of $`\alpha _s`$ are included. Kinematic constraints are seen to have a significant effect, suppressing the emission of large numbers of energetic gluons. The effect is clearly visible in fig. 1 (solid lines), where the ‘improved’ BFKL calculation actually gives a smaller cross section than that at lowest order. This is due to the sizeable increase in $`\widehat{s}`$, and hence in the large-$`\mathrm{\Delta }y`$ pdf suppression, caused by the emitted BFKL gluons. The azimuthal decorrelation is also weaker in the more realistic BFKL calculation. This is illustrated in fig. 2, where we show the mean value of $`\mathrm{cos}\mathrm{\Delta }\varphi `$ in dijet production in the improved BFKL MC approach (upper curves). The jets are completely correlated (i.e. back-to-back in the transverse plane) at $`\mathrm{\Delta }y=0`$, and as $`\mathrm{\Delta }y`$ increases we see the characteristic BFKL decorrelation, followed by a flattening out and then an increase in $`\langle \mathrm{cos}\mathrm{\Delta }\varphi \rangle `$ as the kinematic limit is approached. Not surprisingly, the kinematic constraints have a much stronger effect when the $`p_T`$ threshold is set at $`50`$ GeV (dashed curve) than at $`20`$ GeV (solid curve); in the latter case more phase space is available to radiate gluons. We also show for comparison the decorrelation for dijet production at the Tevatron for $`p_T>20`$ GeV. There we see that the lower collision energy (1.8 TeV) limits the allowed rapidity difference and substantially suppresses the decorrelation at large $`\mathrm{\Delta }y`$. Recent measurements of the dijet decorrelation by the D0 collaboration at the Tevatron are in reasonable agreement with the improved BFKL parton-level predictions. Note that the larger centre-of-mass energy relative to the transverse momentum threshold at the LHC would seem to give it a significant advantage as far as observing BFKL effects is concerned. The lower set of curves in fig. 2 refer to Higgs production via the $`WW,ZZ`$ fusion process $`qq\to qqH`$, and are included for comparison. This process automatically provides a ‘BFKL-like’ dijet sample with large rapidity separation, although evidently the jets are significantly less correlated in azimuthal angle.

## 3 Associated Jet Multiplicities in Higgs Production at the LHC

One important aspect of the final state at the LHC is the number of mini-jets produced. By mini-jets we mean jets with transverse momenta above some resolution scale $`\mu _\mathrm{R}`$ which is very much smaller than the hard process scale $`Q`$. Then the mini-jet multiplicity at small $`x`$ involves not only $`\mathrm{ln}(1/x)\gg 1`$ but also another large logarithm, $`T=\mathrm{ln}(Q^2/\mu _\mathrm{R}^2)`$, which needs to be resummed. The results derived in include all terms of the form $`(\alpha _\mathrm{S}\mathrm{ln}x)^nT^m`$ where $`1\le m\le n`$. Terms with $`m=n`$ are called double-logarithmic (DL) while those with $`1\le m<n`$ give single-logarithmic (SL) corrections.
In the calculations the BFKL formalism has been used, but the results are expected to hold also in the CCFM formalism based on angular ordering of gluon emissions. In order to find $`\overline{r}(x)`$, the mean number of resolved mini-jets as a function of $`x`$, it is convenient to compute first the Mellin transform of this quantity
$$\overline{r}_\omega =\int _0^1dx\,x^\omega \,\overline{r}(x).$$ (5)
We find
$$\overline{r}_\omega =-\frac{1}{\chi ^{\prime }}\left(\frac{1}{\gamma _\mathrm{L}}+\frac{\chi ^{\prime \prime }}{2\chi ^{\prime }}+\chi \right)T-\frac{1}{2\chi ^{\prime }}T^2$$ (6)
where $`\gamma _\mathrm{L}`$ is the Lipatov anomalous dimension which solves
$$\omega =-\overline{\alpha }_\mathrm{S}\left[2\gamma _\mathrm{E}+\psi (\gamma )+\psi (1-\gamma )\right]\equiv \overline{\alpha }_\mathrm{S}\chi (\gamma ).$$ (7)
Here $`\overline{\alpha }_\mathrm{S}=3\alpha _\mathrm{S}/\pi `$, $`\psi `$ is the digamma function and $`\gamma _\mathrm{E}`$ the Euler constant. In eq. (6), $`\chi ^{\prime }`$ means the derivative of $`\chi (\gamma )`$ evaluated at $`\gamma =\gamma _\mathrm{L}`$. The corresponding expression for the variance in the number of jets, $`\sigma _\omega ^2\equiv \overline{r^2}_\omega -\overline{r}_\omega ^2`$, is more complicated, see . To invert the Mellin transform (5), we can expand eq. (6) perturbatively as a series in $`\overline{\alpha }_\mathrm{S}/\omega `$ and then invert term by term using
$$\frac{1}{2\pi i}\int _{\frac{1}{2}-i\infty }^{\frac{1}{2}+i\infty }d\omega \,x^{-\omega -1}\left(\frac{\overline{\alpha }_\mathrm{S}}{\omega }\right)^n=\frac{\overline{\alpha }_\mathrm{S}}{x}\frac{[\overline{\alpha }_\mathrm{S}\mathrm{ln}(1/x)]^{n-1}}{(n-1)!}.$$ (8)
The factorial in the denominator makes the resulting series in $`x`$-space converge very rapidly. It is then straightforward to compute the mini-jet multiplicity associated with pointlike scattering on the gluonic component of the proton at small $`x`$ using
$$n(x)=\frac{F(x,Q^2)\otimes \overline{r}(x)}{F(x,Q^2)}$$ (9)
where $`F(x,Q^2)`$ is the gluon structure function and $`\otimes `$ represents a convolution in $`x`$. As an application of these results, we can compute the mean value $`N`$ and the dispersion $`\sigma _N`$ of the associated (mini-)jet multiplicity in Higgs boson production at the LHC, assuming the dominant production mechanism to be gluon-gluon fusion. At zero rapidity we have gluon momentum fractions $`x_1=x_2=x=M_H/\sqrt{s}`$ where $`M_H`$ is the Higgs mass, and $`N=n(x_1)+n(x_2)=2n(x)`$. Similarly $`\sigma _N^2(x)=\sigma _n^2(x_1)+\sigma _n^2(x_2)=2\sigma _n^2(x)`$.
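For a numerical evaluation of eqs. (6), (7) the only nontrivial ingredient is the Lipatov anomalous dimension $`\gamma _\mathrm{L}`$. A minimal root-finding sketch follows (our illustration; the value $`\overline{\alpha }_\mathrm{S}=0.2`$ is an assumed input, and real solutions on the branch $`0<\gamma \le 1/2`$ exist only for $`\omega >4\mathrm{ln}2\,\overline{\alpha }_\mathrm{S}`$):

```python
import mpmath as mp

abar = 0.2  # assumed value of 3*alpha_S/pi

def chi(g):
    # chi(gamma) = -[2*gamma_E + psi(gamma) + psi(1 - gamma)], cf. eq. (7)
    return -(2 * mp.euler + mp.digamma(g) + mp.digamma(1 - g))

def gamma_L(w):
    # root of w = abar * chi(gamma) on the physical branch 0 < gamma <= 1/2
    return mp.findroot(lambda g: abar * chi(g) - w, 0.25)

for w in [0.6, 0.8, 1.0]:
    print(w, mp.nstr(gamma_L(w), 6))

# gamma_L approaches abar/w at large omega, recovering the lowest-order behaviour.
```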
The results are shown in fig. 3. We see that in this case the DL results give an excellent approximation and the SL terms are less significant. The mini-jet multiplicity and its dispersion are rather insensitive to the Higgs mass at the energy of the LHC. The mean number of associated mini-jets is fairly low, so that the identification of the Higgs boson should not be seriously affected by them.
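As a final cross-check, the term-by-term inversion formula (8) can be verified by taking moments of its right-hand side: after the substitution $`u=\mathrm{ln}(1/x)`$ the moment integral reduces to a Gamma-function integral. A small sympy sketch (our illustration):

```python
import sympy as sp

u, w, ab = sp.symbols('u omega alphabar', positive=True)

# Moments of the x-space expression in eq. (8), with u = ln(1/x):
#   int_0^1 dx x^w (ab/x) [ab ln(1/x)]^(n-1)/(n-1)!
#     = int_0^oo du exp(-w u) ab^n u^(n-1)/(n-1)!  =  (ab/w)^n
for n in range(1, 5):
    lhs = sp.integrate(sp.exp(-w * u) * ab**n * u**(n - 1) / sp.factorial(n - 1),
                       (u, 0, sp.oo))
    print(n, sp.simplify(lhs - (ab / w)**n))  # 0 for each n: the moments match
```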
# Horizontal and Vertical Five-Branes in Heterotic/F-Theory Duality

hep-th/9912025, IFP-9912-UNC, HUB-EP-99/62

Björn Andreas (bandreas@physics.unc.edu; supported by U.S. DOE grant DE-FG05-85ER40219), Department of Physics and Astronomy, University of North Carolina, Chapel Hill, NC 27599-3255, USA

Gottfried Curio (curio@physik.hu-berlin.de), Humboldt-Universität zu Berlin, Institut für Physik, D-10115 Berlin, Germany

We consider the heterotic string on an elliptic Calabi-Yau three-fold with five-branes wrapping curves in the base (’horizontal’ curves) of the Calabi-Yau as well as some elliptic fibers (’vertical’ curves). We show that in this generalized set-up, where the association of the heterotic side with the $`F`$-theory side is changed relative to the purely vertical situation, the number of five-branes wrapping the elliptic fibers still matches the corresponding number of $`F`$-theory three-branes.

1. Introduction

The compactification of the heterotic string on a Calabi-Yau three-fold with a vector bundle over it, which breaks part of the $`E_8\times E_8`$ gauge symmetry, leads to $`N=1`$ supersymmetric vacua in four dimensions. In particular the class of elliptically fibered Calabi-Yau three-folds $`Z`$ has the double advantage of admitting a direct construction of the vector bundles and of allowing a dual description in terms of $`F`$-theory compactified on elliptically fibered Calabi-Yau four-folds $`X`$. To obtain a consistent heterotic compactification on elliptic Calabi-Yau three-folds, the anomaly cancellation condition requires the inclusion of a number $`a_f`$ of five-branes wrapping the elliptic fibers. It has been shown that these five-branes match precisely the number $`n_3`$ of space-time filling three-branes necessary for tadpole cancellation on the $`F`$-theory side, thus giving a heterotic string explanation of the number of three-branes. Here we will be interested in a slightly more general situation, where one admits also five-branes wrapping holomorphic curves in the base $`B_2`$ of the elliptic Calabi-Yau three-fold (we will take $`B_2`$ to be a Hirzebruch surface $`F_n`$ with $`n=0,1,2`$ or a del Pezzo surface $`dP_k`$, $`k=0,\mathrm{\dots },8`$; the case of an Enriques surface could be treated along similar lines). The picture which emerges is that the five-branes wrapping the elliptic fibers map to the $`F`$-theory three-branes, whereas a five-brane wrapping a curve in the base $`B_2`$ of the elliptic Calabi-Yau three-fold $`Z`$ corresponds to a blow-up of the base $`B_3`$ along the curve in the common $`B_2`$ of the elliptic Calabi-Yau four-fold $`X`$ on the $`F`$-theory side (recall that $`B_3`$ resp. $`X^4`$ is a $`P^1`$ resp. $`K3`$ fibration over $`B_2`$). Several aspects of this generalized set-up were discussed in the literature, mainly in the case of wrapping curves of genus zero in the base and in the extreme situation when the bundles have trivial structure group, leading to an unbroken $`E_8\times E_8`$ gauge group. The possibility of wrapping five-branes on curves in the base (or even in $`Z`$) was also carefully discussed with many examples. Note that the $`g`$ $`U(1)`$ vector multiplets ($`g`$ being the genus of $`C`$) coming from reduction of the two-form of self-dual field-strength on the five-brane are associated with corresponding modes from three-cycles in the blown-up base $`\stackrel{~}{B}_3`$ and the four-form of self-dual field-strength of type IIB on the $`F`$-theory side.
Note that a curve lying in the base $`B`$ will in general have an interesting set of deformations, which will play a role below. In the set-up with five-branes wrapping only fibers, the brane parameters on both sides took care of themselves. Remember that the ’three coordinates’ of such a three-brane, considered as a point in the $`𝐏^1`$ bundle $`B_3`$ over the visible $`B_2`$, are given as follows: the base point of the corresponding elliptic fiber on the heterotic side is trivially given on the common $`B_2`$, and the ’third coordinate’, i.e. the fact that there is still a moduli space $`𝐏^1`$ in the game, is given by a complex scalar whose real resp. imaginary part is given by the position in the $`M`$-theory interval $`S^1/Z_2`$ in the eleventh dimension resp. the axion $`a`$ corresponding to the reduction of the self-dual three-form on the five-brane (note that this $`S^1`$ must collapse to a point at the boundaries of the interval - thus leading to the third, missing complex coordinate, parametrizing on the $`F`$-theory side the $`P^1`$ fiber over the common $`B_2`$ - as there one has a transition to $`E_8`$ instantons without the interval position modulus and so without $`a`$ too; the five-brane three-form, being odd, must vanish at the orbifold fixed planes anyway). Note also that the heterotic string on a wrapped $`T^2`$ is equivalent to the $`SO(32)`$ heterotic string, and a small instanton five-brane has a non-perturbative $`SU(2)`$ which has as moduli space on $`T^2`$ exactly a $`𝐏^1`$ (in general an $`SU(n)`$ bundle on $`T^2`$ has as moduli space a $`𝐏^{n-1}`$). Of course the expectation is still that the five-branes wrapping elliptic fibers correspond to the $`F`$-theory three-branes, and this should not be changed by the occurrence of the ’other’, horizontal five-branes. However, the actual proof of the matching in the purely vertical case did not associate these objects directly with each other (although of course this is the obvious underlying intuition) but rather computed the number of the relevant five-branes resp. three-branes independently on both sides in data of the common base $`B_2`$. To get the matching one then had to use an association of both sides (this was the relation $`\eta _{1,2}=6c_1(B_2)\pm t`$, respecting the condition $`\eta _1+\eta _2=12c_1(B_2)`$, where $`t`$ characterizes the $`P^1`$ fibration of $`B_3`$ over $`B_2`$). But we will have a non-trivial horizontal five-brane (of class $`W_B`$), and so the mentioned condition changes to $`\eta _1+\eta _2+W_B=12c_1(B_2)`$ on the one hand, while $`B_3`$ is modified by the blow-up on the other hand. This makes the numerical matching non-trivial. In order to prove the matching of vertical five-branes and three-branes in this generalized set-up, we will facilitate the analysis by considering a five-brane wrapping a smooth irreducible curve $`C`$ representing a class $`W_B`$ and thus blowing up the $`F`$-theory base $`B_3`$ once. We will separate our analysis into parts, starting with the simplest case of a heterotic string compactification on $`Z`$ with an $`E_8\times E_8`$ vector bundle corresponding under duality to $`F`$-theory on a smooth $`X`$ (where smooth means that the elliptic fibers of $`X`$ degenerate in codimension one not worse than $`I_1`$ in the Kodaira classification).
After explaining how a blow-up along $`W_B`$ in the base $`B_3`$ of $`X`$ changes the Euler number of $`X`$, and therefore the number of three-branes, we follow the strategy of the vertical case and express the numbers of three-branes and five-branes in comparable data on the common base $`B_2`$. Then we proceed to models with an A-D-E gauge group and consider a heterotic model with an $`E_8\times V_2`$ vector bundle, where $`V_2`$ can be an $`E_7`$, $`E_6`$ or $`SU(n)`$ bundle, giving the corresponding unbroken gauge groups. The dual $`F`$-theory model (assuming no further monodromy effects, which could alter the gauge group, are present there) is described by compactifying on an $`X`$ whose elliptic fibers degenerate of type A-D-E. For simplicity we will admit only fibers that degenerate over codimension one in $`B_3`$ (which can be arranged by adjusting the $`𝐏^1`$ bundle over $`B_2`$, respectively the $`\eta `$ class on the heterotic side). In order to ’fill out’ $`V_2`$ with enough instanton number (measured by the $`\eta `$-bound) we have to perform the $`W_B`$-shift in the $`E_8`$ bundle. One has, just as in the smooth case, again an explicit formula for the Euler number, which was recently proved. However, as this ansatz is not immediately adapted to the blow-up procedure, which destroys the usual $`P^1`$ fibration structure of $`B_3`$, we will proceed in the proof of the five-brane/three-brane correspondence more easily by adopting the strategy of the Hodge-number computation. In the latter one expresses the number of three-branes again via the Euler number, which this time is in turn expressed by the Hodge numbers, and these again by the heterotic data (including the index of bundle moduli ’inside’ the complex structure deformations of $`X`$); then the close similarity of the expression for the index of bundle moduli with the expression for the second Chern class of the vector bundle is used to transport the number of five-branes directly to the $`F`$-theory side. We will reduce our more general situation ($`W_B\ne 0`$) to this original case ($`W_B=0`$) by collecting the changes in the Hodge numbers of the fourfold induced by the blow-up procedure resp. by the ’change’ in heterotic bundle moduli (in a sense made precise below) when transported to the $`F`$-theory side; taken together with the deformations of the chosen curve $`C`$, these changes will then cancel out in total. In section 2 we outline the general procedure of the $`\eta `$-shift induced by the occurrence of a non-trivial $`W_B`$. In section 3 we show the three-brane/five-brane correspondence for the case of a smooth fourfold. In section 4 we lay the ground for the treatment of the (codimension one) singular case. We recall some points of the identification of the moduli spaces and of the expression of the Hodge numbers of the fourfold in heterotic data; starting from there, we show how this already gave the result $`a_f=n_3`$ in the simple case $`W_B=0`$ (this was shown in the smooth case by a different procedure; we build on this type of argument in section 3). Then we derive the (functional, as will be explained below) change in the index of bundle moduli under the shift in $`\eta `$ by $`W_B`$. In section 5 the necessary information about deformations of $`C`$ is derived. For this we investigate the corresponding question for $`C`$ in the base surface $`B=B_2`$ and in the elliptic surface $`\mathcal{E}`$ which lies in $`Z`$ above $`C`$.
In section 6 we put our information about the (functional) changes in the Hodge numbers together and show that they cancel each other, thereby reducing the proof of our result to the original case ($`W_B=0`$).

2. Testing $`a_f=n_3`$ with $`W_B`$ turned on

We consider the heterotic string on $`\pi :Z\to B_2`$ with a section $`\sigma `$ and specify a vector bundle $`V=E_8\times V_2`$, where we fix $`V_1`$ to be an $`E_8`$ bundle and $`V_2=E_8,E_7,E_6`$ or $`SU(n)`$. Under the decomposition $`\sigma H^2(B_2)+H^4(B_2)`$, where the base cohomology is pulled back by $`\pi ^{*}`$, the second Chern class of $`V_2`$, respectively the fundamental characteristic class of $`E_{8,7,6}`$ bundles, is given by
$$\lambda (V_2)=\eta \sigma +\pi ^{*}(\omega )$$
where $`\lambda (V_2)=c_2(V_2)`$ for $`V_2=SU(n)`$ and $`\lambda (V_2)=c_2(V_2)/C`$ with $`C=60,36,24`$ for $`E_8,E_7,E_6`$ bundles. Note further that $`\eta \in H^2(B_2)`$ is arbitrary and $`\omega \in H^4(B_2)`$ is determined in terms of $`\eta `$. The explicit expressions for the characteristic classes of $`E_8`$ bundles and $`SU(n)`$ bundles are known; for $`E_7`$ and $`E_6`$ bundles see the literature. A corresponding decomposition of the second Chern class of $`Z`$ gives
$$c_2(Z)=12c_1\sigma +c_2+11c_1^2.$$
The general condition for anomaly cancellation with five-branes is
$$\lambda (V_1)+\lambda (V_2)+W=c_2(Z)\text{ }(2.1)$$
where $`W`$ is the cohomology class of the five-branes. As we want to treat the case that one has, besides some elliptic fibers wrapped by five-branes (the case already treated previously), also a wrapped curve $`C`$ in the base, let us now assume for the five-brane class a corresponding decomposition into a part consisting of base cohomology (to be considered embedded via $`\sigma `$) and a fiber part
$$W=W_B+a_fF$$
where $`W`$ has to be effective, which in the cases considered is equivalent to $`W_B`$ being effective and $`a_f\ge 0`$. Now the $`\sigma H^2(B_2)`$ part of (2.1) gives the condition
$$\eta _1+\eta _2+W_B=12c_1.$$
On the other hand one gets an expression for the number of five-branes wrapping still just the elliptic fiber of $`Z`$
$$a_f=12+10c_1^2-\omega _1-\omega _2.$$
This gives also a prediction for the number of three-branes on the $`F`$-theory side, which is proportional to the Euler number of the Calabi-Yau four-fold: one has $`\chi (X)/24=n_3`$. Now the fibration structure of the $`F`$-theory base $`B_3`$ is described by assuming the $`𝐏^1`$ bundle over $`B_2`$ to be the projectivization of a vector bundle $`Y=𝒪\oplus 𝒯`$, where $`𝒯`$ is a line bundle over $`B_2`$ and the cohomology class $`t=c_1(𝒯)`$ encodes the $`𝐏^1`$ fibration structure. Now the duality with the heterotic side is, in the case $`W_B=0`$, implemented by choosing the $`E_8\times V_2`$ bundle so that the $`\eta `$ class of the $`E_8`$ bundle is given by $`\eta _1=6c_1(B_2)+t`$ and the $`\eta `$ class of the $`V_2`$ bundle by $`\eta _2=6c_1(B_2)-t`$. As mentioned, we will now allow five-branes to wrap a curve $`C`$ of class $`W_B`$ in the base $`B_2`$. This modifies the direct relation between the $`\eta `$ classes $`\eta _{1,2}=6c_1\pm t`$ and the $`𝐏^1`$ fibration described by $`t`$. We will describe what makes it possible to still have the relation $`a_f=n_3`$.
In accordance with condition (2.1) we must deviate from the $`\eta _{1,2}=6c_1\pm t`$ set-up by redefining $$\eta _1\to \eta _1-W_B.$$ Before we come to the actual computations let us note that in general the relation for the number of three-branes will be modified due to the appearance of the $`F`$-theory analog of $`M`$-theory four-flux, which one is even forced to turn on if $`\chi (X)/24`$ is not integral . One has the generalized relation $$\frac{\chi (X)}{24}=n_3+\frac{1}{2}\int _XG\wedge G.$$ In particular for $`SU(n)`$ bundles it was expected from an argument given in that $`G^2=\sum _i\frac{1}{2}\pi _{*}(\gamma _i^2)`$, i.e. there is a relation between four-flux and discrete twist (which appears in the $`SU(n)`$ bundle construction). Such a relation could be proved indirectly by an explicit computation in . Since a vector bundle is determined by specifying the $`\eta `$ class, which has to satisfy a bound , let us note here that (the form of) the bound of (for the new $`\eta `$) is not modified if we include a non-zero $`W_B`$, as discussed in .

3. The smooth case

As mentioned in the introduction, we consider here the heterotic string on $`Z`$ with an $`E_8\times E_8`$ bundle corresponding to $`F`$-theory on a smooth $`X`$. In order to perform the $`\eta `$-shift in the vector bundle, let us recall that the fundamental characteristic class of an $`E_8`$ bundle is $$\lambda (V)=\eta \sigma -15\eta ^2+135\eta c_1-310c_1^2$$ and the anomaly cancellation condition then determines the number of five-branes $`a_f`$ wrapping elliptic fibers of $`Z`$. Now we have two choices: to perform the shift either in the first or in the second $`E_8`$ factor. For the shift in $`\eta _1`$ we get $$a_f=c_2+91c_1^2+30t^2+15W_B^2-45W_Bc_1-30W_Bt$$ whereas for shifting $`\eta _2`$ we get $$a_f=c_2+91c_1^2+30t^2+15W_B^2-45W_Bc_1+30W_Bt$$ which for vanishing $`W_B`$ of course reduces to the known expression derived in . Note that if we perform the shift (assuming here $`t\ge 0`$) in $`\eta _1`$ the $`\eta `$-bound requires $`W_B\le c_1+t`$ whereas for $`\eta _2`$ the bound requires $`t\le c_1`$. Now if we perform the shift in $`\eta _2`$ then the bound for that bundle requires $`t+W_B\le c_1`$ whereas from $`\eta _1`$ comes no further condition on $`t`$ since $`6c_1+t>5c_1`$. On the other hand, the number of three-branes $`n_3=\chi (X)/24`$ to be included for a consistent $`F`$-theory compactification on a smooth $`X`$, as computed in , is $$n_3=12+15\int _{B_3}c_1(B_3)^3.$$ We will be interested now to see how the effect of wrapping a five-brane on a curve $`C`$ in $`B_2`$ of class $`W_B`$ on the heterotic side, reflected in a blow-up along $`C`$ in $`B_3`$, will change the number of three-branes on the $`F`$-theory side. A blow-up of $`C`$ in $`B_3`$ produces a three-fold $`\pi :\stackrel{~}{B}_3\to B_3`$ with $$c_1(\stackrel{~}{B}_3)=\pi ^*c_1(B_3)-E$$ where $`E`$ denotes the exceptional divisor, a ruled surface over $`C`$. It is useful to recall here that the well known case of a blow-up of a point on a surface, leading to an exceptional $`𝐏^\mathrm{𝟏}`$ of self-intersection number $`-1`$, generalizes to a relation (in $`H^4(\stackrel{~}{B_3})`$) for the ruled surface over $`C`$ ($`E^3`$ is here a number which occurs as prefactor of $`l`$) $$E^2=-\pi ^*W_B-E^3l$$ where $`l`$ denotes the fiber of the ruled surface $`E`$ with $`El=-1`$ (for a proof of this relation cf. ). With this in hand we can proceed and compute the number of three-branes which are expected to match the $`a_f`$ five-branes.
What we find is $$n_3=12+15\left(\int _{B_3}c_1(B_3)^3-3W_Bc_1(B_3)-E^3\right).$$ Since $`B_3`$ is a $`𝐏^\mathrm{𝟏}`$ bundle over $`B_2`$ obtained by projectivizing a vector bundle, as explained above, the adjunction formula gives $$c_1(B_3)=c_1+2r+t.$$ The triple intersection of the ruled surface $`E`$ is given by (cf. ) $$E^3=-\int _{W_B}c_1(N_{B_3}C)=-(c_1(B_3)W_B-\chi (C))$$ With $`12=\int _{B_2}c_1^2+c_2`$ for the rational $`B_2`$ and $`r(r+t)=0`$ in the cohomology ring of $`B_3`$ (i.e. the sections of the line bundles $`𝒪(1)`$ and $`𝒪(1)\otimes 𝒯`$ over $`B_2`$, cf. , have no common zeros), and noting that $`W_B`$ is embedded via $`W_Br`$ into $`B_3`$ so that the term $`W_Br`$ is actually $`W_Br^2`$, leading after integration over the $`𝐏^\mathrm{𝟏}`$ fibers of $`B_3`$ to $`-W_Bt`$, one gets with adjunction $`\chi (C)=-W_B(W_B-c_1)`$ the final expression for the number of three-branes expressed in terms of $`B_2`$ data $$n_3=\int _{B_2}c_2+91c_1^2+30t^2+15W_B^2-45W_Bc_1+30W_Bt$$ matching the $`a_f`$ heterotic five-branes in the case of the shifted $`\eta _2`$ (for the $`\eta _1`$ shift one has to replace $`t`$ by $`-t`$ on the $`F`$-theory side).

4. The singular case I: comparison of moduli spaces

We will consider here the situation where the fibers of the Calabi-Yau four-fold degenerate of type A-D-E over codimension one (actually over $`B_2`$) in $`B_3`$, corresponding to an unbroken gauge group of the same type, which on the heterotic side comes from an $`E_8\times V_2`$ bundle over $`Z`$. The codimension one degeneration is established (cf. ) for the A series by setting $`t=c_1`$; for the D series only in the case $`D_4`$ can a codimension one condition be established, for $`t=2c_1`$; and for the E series the conditions are $`t=3c_1,4c_1`$ for $`E_6`$ resp. $`E_7`$. Note that we have to perform the shift always in the $`E_8`$ bundle, since the choice of $`t`$ (for codimension one) sets the $`\eta `$ class on its lower bound. Now, in the case of a codimension one degeneration of the elliptic fiber one can write down a general expression for the Euler number of the corresponding Calabi-Yau four-fold (which was first written down in , based there on toric computer analysis, and proved in ); it is given by $$\chi (X)=288+360\int _{B_3}c_1(B_3)^3-r(G)c(G)(c(G)+1)\int _{B_2}c_1(B_2)^2$$ where $`r(G)`$ and $`c(G)`$ denote the rank and the Coxeter number of the A-D-E gauge group $`G`$ respectively. If however one would now like to apply the blow-up procedure on the basis of this formula in order to get the change in the number of three-branes, this could not be done directly, since the fibration structure of $`B_3`$ changes under the blow-up. Although one could refine the derivation of (4.1), in the following we will discuss (as indicated in the introduction) a different approach to test $`a_f=n_3`$, which will also provide us with the change in the Hodge numbers after the blow-up. The shift redefinition described above for $`\eta _1`$ will of course change $`a_f`$, and so $`n_3`$ too. But this is not the change we will follow, as such changes are still captured by the transport provided by the re-expression (cf. below) between the number of bundle moduli and the second Chern class of the vector bundle. What we will follow is how this transport itself changes. This is a change caused just by the change in the $`\sigma H^2(B_2)`$ part of $`c_2(V_2)`$, whereas the change which occurs because of just using 'another' $`\eta _1`$ occurs in the $`H^4(B_2)`$ part (whose changing influence on $`a_f`$ is still captured by the original construction).
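Since the two sides of this match involve a number of sign restorations and ring relations, a mechanical check is reassuring. The following sympy sketch (our own symbol names; it is not part of the paper) re-runs both computations, the heterotic $`a_f`$ from the $`E_8`$ class and the $`F`$-theory $`n_3`$ from the blow-up geometry, keeping the intersection numbers on $`B_2`$ as formal symbols:

```python
# Sketch: check n_3 = a_f for the eta_2-shift, using only the relations
# quoted above (r^2 = -r*t, E^3 = -(c1(B_3).W_B - chi(C)), adjunction on C,
# and int(c2 + c1^2) = 12 on the rational B_2). All names are ours.
import sympy as sp

c1, t, W, r = sp.symbols('c1 t W r')
c1c1, c1t, tt, WW, Wc1, Wt = sp.symbols('c1c1 c1t tt WW Wc1 Wt')
nums = {c1**2: c1c1, t**2: tt, W**2: WW, c1*t: c1t, W*c1: Wc1, W*t: Wt}

# ---- heterotic side: a_f = 12 + 10 c1^2 - omega_1 - omega_2 ------------
omega = lambda eta: -15*eta**2 + 135*eta*c1 - 310*c1**2
def a_f(eta1, eta2):
    return 12 + 10*c1c1 + sp.expand(-omega(eta1) - omega(eta2)).subs(nums)
af_shift2 = sp.expand(a_f(6*c1 + t, 6*c1 - t - W))     # shift in eta_2

# ---- F-theory side: n_3 = 12 + 15(int c1(B_3)^3 - 3 W.c1(B_3) - E^3) ---
def reduce_r(expr):
    """Impose the ring relation r^2 = -r*t, i.e. r^k = r*(-t)^(k-1)."""
    p = sp.Poly(sp.expand(expr), r)
    return sp.expand(sum(c*(r*(-t)**(k - 1) if k else 1)
                         for (k,), c in p.terms()))

# only terms linear in r survive the integration over B_3
lin = reduce_r((c1 + 2*r + t)**3).coeff(r, 1)
I3 = sp.expand(lin).subs(nums)
WBc1B3 = Wc1 + Wt - 2*Wt        # W_B.c1(B_3), as argued in the text
chiC = -(WW - Wc1)              # adjunction on C
E3 = -(WBc1B3 - chiC)
n3 = sp.expand(12 + 15*(I3 - 3*WBc1B3 - E3))

print(sp.simplify(n3 - af_shift2))   # -> 0, i.e. n_3 = a_f
```

Running the check with the first $`E_8`$ factor shifted instead reproduces the $`-30W_Bt`$ sign, consistent with the $`t\to -t`$ remark above.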
Therefore let us first recall some facts about the general comparison of the $`F`$-theory and heterotic moduli spaces and spectra. These will provide us with relations expressing the Calabi-Yau four-fold Hodge numbers in terms of heterotic data. We will then show how these relations have to be modified when on the heterotic side five-branes wrapping curves in the base are included. The moduli in a 4D N=1 heterotic compactification on an elliptic CY, as well as in the dual $`F`$-theoretic compactification, break into "base" parameters, which are even (under the natural involution of the elliptic curves), and "fiber" or twisting parameters; the latter include a continuous part, which is odd, as well as a discrete part. In all the heterotic moduli were interpreted in terms of cohomology groups of the spectral covers, and identified with the corresponding $`F`$-theoretic moduli in a certain stable degeneration. For this one uses the close connection of the spectral cover and the A-D-E del Pezzo fibrations. For the continuous part of the twisting moduli, this amounts to an isomorphism between certain abelian varieties: the connected component of the heterotic Prym variety (a modified Jacobian) and the $`F`$-theoretic intermediate Jacobian. The comparison of the discrete part, involving the gamma class and four-flux, refines the matching of $`a_f`$ five-branes and $`n_3`$ three-branes, as mentioned above. By working with an elliptically fibered $`Z`$ one can extend adiabatically the known results about moduli spaces of $`G`$-bundles over an elliptic curve $`E=T^2`$, of course taking into account that such a fiberwise description of the isomorphism class of a bundle leaves room for twisting along the base $`B_2`$. The latter possibility actually involves a two-fold complication: there is a continuous as well as a discrete part to these data. It is quite easy to see this for $`G=SU(n)`$: in this case $`V_2`$ can be constructed via push-forward of the Poincare bundle on the spectral cover $`C\times _BZ`$, possibly twisted by a line bundle $`𝒩`$ over the spectral surface $`C`$ (an $`n`$-fold cover of $`B_2`$ (via $`\pi `$) lying in $`Z`$), whose first Chern class (projected to $`B_2`$) is known from the condition $`c_1(V_2)=0`$. So $`𝒩`$ itself is known up to the following two remaining degrees of freedom: first a class in $`H^{1,1}(C)`$ which projects to zero in $`B`$ (the discrete part), and second an element of $`Jac(C):=Pic_0(C)`$ (the continuous part; the moduli odd under the elliptic involution). The continuous part is expected to correspond on the $`F`$-theory side to the odd moduli, related there to the intermediate Jacobian $`J^3(X)=H^3(X,𝐑)/H^3(X,𝐙)`$ of dimension $`h^{2,1}(X)`$, so that the following picture emerges (ignoring the Kahler classes on both sides). The moduli space $`ℳ`$ of the bundles is fibered, $`ℳ\to 𝒴`$, with fiber $`Jac(C)`$. There is a corresponding picture on the $`F`$-theory side: the moduli space there is again fibered. The base is the moduli space of complex deformations, the fiber is (up to the discrete $`G`$-flux twisting parameters) the intermediate Jacobian, and one has $$h^{2,1}(Z)+h^1(Z,adV)+1=h^{3,1}(X)+h^{2,1}(X).$$ Here the number of bundle moduli $`h^1(Z,adV)=n_e+n_o`$, even or odd under the involution coming from the involution on the elliptic fiber, can be computed using a fixed point theorem, by first effectively computing the character-valued index $`I=n_e-n_o`$.
So one gets $$h^1(Z,adV)=I+2n_o$$ where the index $`I`$ is given by an integral over the fixed point set and can be expressed in terms of the characteristic class of $`V`$ (cf. the last section and ) ($`rk=r(V)`$) $$I=rk-4(\lambda (V)-\eta \sigma )+\eta c_1.$$ Note that this expression applies to vector bundles which are invariant under the involution of the elliptic fiber ($`\tau `$-invariant), which is the case for $`E_8`$, $`E_7`$ and $`E_6`$ bundles (whose characteristic classes $`\lambda (V)`$ were computed using the parabolic bundle construction, which includes no additional twist which would break the $`\tau `$-invariance); also $`SU(n)`$ bundles where $`n`$ is even are $`\tau `$-invariant, but bundles with $`n`$ odd (since one can twist with a line bundle in the spectral cover bundle construction, introducing thereby an additional term into $`\lambda (V)`$) are in general not $`\tau `$-invariant; however, the codimension one condition on the $`F`$-theory side (specifying $`t`$) leads to the vanishing of the additional term in $`\lambda (V)`$, and in this case bundles with $`n`$ odd are $`\tau `$-invariant. Note also that one can give an interpretation of all the bundle moduli in terms of even respectively odd cohomology of the spectral surface, including an interpretation of the index as giving essentially the holomorphic Euler characteristic of the spectral surface. More precisely one can identify the number of local complex deformations $`h^{2,0}(C)`$ of $`C`$ with $`n_e`$, respectively the dimension $`h^{1,0}(C)`$ of $`Jac(C):=Pic_0(C)`$ with $`n_o`$. In this way one gets from a spectrum comparison the following relations (where we assume that the elliptic fiber degenerates over codimension one in $`B_3`$) $$\begin{array}{cc}\hfill h^{1,1}(X)& =h^{1,1}(Z)+1+r\hfill \\ \hfill h^{2,1}(X)& =n_o\hfill \\ \hfill h^{3,1}(X)& =h^{2,1}(Z)+I+n_o+1\hfill \end{array}$$ From these relations one finds (this is recalled briefly below) that $`a_f=n_3`$ in the case that all five-branes wrap elliptic fibers only. For this one first expresses (from the heterotic identification) the Hodge numbers of $`X`$ purely in data of the common base $`B_2`$, then one uses the expression for the index $`I`$ (cf. ) and finally takes into account that on the $`F`$-theory side the $`h^{2,1}(X)`$ classes correspond to modes odd under the $`\tau `$ involution on the heterotic side. For this recall that for an $`SU(n)`$ bundle one has (cf. below for a discussion of the influence of the discrete twisting parameter) $$\begin{array}{cc}\hfill c_2(V)& =\eta \sigma -\frac{n^3-n}{24}c_1(B_2)^2-\frac{n}{8}\eta (\eta -nc_1(B_2))\hfill \\ & =\eta \sigma +\omega \hfill \end{array}$$ Furthermore the index $`I=n_e-n_o`$ is computed as (cf. also the appendix of ) $$I=n-1+\frac{n^3-n}{6}c_1(B_2)^2+\frac{n}{2}\eta (\eta -nc_1(B_2))+\eta c_1(B_2)$$ so that $`I=I_1+I_2`$ becomes, with $`rk=rk_1+rk_2`$ the sum of the ranks of the two vector bundles (so that $`r=8+8-rk`$ is the rank of the unbroken gauge group), $$I=rk-4(\omega _1+\omega _2)+(\eta _1+\eta _2)c_1$$ leading with (2.1) to the result indicated above. Note that the index for $`SU(n)`$ bundles was computed in and for $`E_8`$ bundles in . Before we proceed to the refinements in the situation $`W_B\ne 0`$ let us briefly recall how one proceeds from in the argument for $`n_3=a_f`$ in the case $`W_B=0`$.
From the last expression for the total index one gets $$\begin{array}{cc}\hfill I& =rk-4(c_2(V_1)+c_2(V_2))+48c_1(B_2)\sigma +12c_1^2(B_2)\hfill \\ & =rk-4(12c_1(B_2)\sigma +c_2(B_2)+11c_1^2(B_2))+4a_f+48c_1(B_2)\sigma +12c_1^2(B_2)\hfill \\ & =rk-(48+28c_1^2(B_2))+4a_f\hfill \end{array}$$ From $`\chi (X)/6-8=h^{1,1}(X)-h^{2,1}(X)+h^{3,1}(X)`$ (cf. ), i.e. $$n_3=2+\frac{1}{4}(h^{1,1}(X)-h^{2,1}(X)+h^{3,1}(X))$$ and the relations $$\begin{array}{cc}\hfill h^{1,1}(X)& =12-c_1^2(B_2)+r\hfill \\ \hfill h^{3,1}(X)& =12+29c_1^2(B_2)+I+n_o\hfill \end{array}$$ which are refinements of (4.1) using $`h^{1,1}(Z)=h^{1,1}(B_2)+1=c_2(B_2)-1=11-c_1(B_2)^2`$ (here we assume that there is only one cohomologically independent section $`\sigma `$ of the elliptic fibration $`\pi :Z\to B`$) and $`\chi (Z)=-60c_1^2(B_2)`$ (cf. ), one then indeed computes $$n_3=2+\frac{1}{4}(12-c_1^2(B_2)+16-rk+12+29c_1^2(B_2)+I)=a_f$$ (Note that in principle one has in $`\omega =\omega _{\gamma =0}-\frac{1}{2}\pi _{*}\gamma ^2`$ a $`\gamma `$-related term which does, in contrast to $`-4\omega `$, not occur in the expression for the index (4.1) (here $`\gamma `$ is the discrete twisting parameter, cf. , which in some cases even has to be present); therefore this $`\gamma `$-related term will also appear besides $`a_f`$ in (4.1); nevertheless we have suppressed this term, for we will assume, as motivated in and indirectly proved for the codimension one singular case in , that it corresponds to a four-flux term on the $`F`$-theory side, so that by (2.1) the sought-for direct relation between $`a_f`$ and $`n_3`$ is still maintained.) So far the original argument. Now, in the case $`W_B\ne 0`$, observe that after the redefinition shift has been made, the index $`I=I_1+I_2`$ can be rewritten as $$I=I_{old}-W_Bc_1=rk-4(\omega _1+\omega _2)+12c_1^2-W_Bc_1$$ which is the usual index (of course depending now on the new $`\eta `$) shifted by $`\mathrm{\Delta }I=-W_Bc_1`$. In the general case one has, besides the geometric and bundle moduli given above, also to take into account the possible deformations $`def_ZC`$ of the actually chosen curve $`C`$ (which is wrapped by the five-brane) inside the cohomology class $`W_B=[C]`$, which we are going to describe now.

5. The singular case II: deformations of $`C`$

We will treat first the deformations of $`C`$ in $`B`$ (always considered to be embedded via $`\sigma `$ in $`Z`$) and in the elliptic surface $`ℰ`$ lying above $`C`$. Note that in the following, in cases of possible doubt where a self-intersection is to be taken, we denote the self-intersection of $`C`$ considered as a curve in the surface $`ℰ`$ by $`C_ℰ^2`$, to distinguish it from the self-intersection $`C_{B_2}^2`$ in $`B=B_2`$.

5.1. The deformations of $`C`$ in $`B`$

We have a 'local' piece of information $`h^0(C,N_BC)`$ about deformations of $`C`$ in $`B`$ as well as a 'global' one, $`\mathrm{def}_{B_2}(C)=h^0(B_2,𝒪(C))-1`$. Using Riemann-Roch $$\sum _{i=0}^{2}(-1)^ih^i(B_2,𝒪(C))=h^0(C,N_{B_2}C)-h^1(C,N_{B_2}C)+\chi (B_2,𝒪)$$ we get (with $`\chi (B_2,𝒪)=1`$ from Noether for our rational $`B_2`$) $$\mathrm{def}_B(C)=h^0(C,N_{B_2}C)-h^1(C,N_BC)+s-h^2(B,𝒪(C))$$ with the local terms $$h^0(C,N_BC)-h^1(C,N_BC)=\frac{1}{2}\chi (C)+\mathrm{deg}N_BC=\frac{Cc_1+C^2}{2}$$ Let us investigate the two higher cohomological corrections.
First, for any curve $`C`$ on a rational surface $`B`$ $$h^2(𝒪_B(C))=h^0(𝒪(K-C))=0$$ which can be seen from the exact sequence $$0\to 𝒪_B\to 𝒪_B(C)\to 𝒪_C(C)\to 0$$ whose associated long exact cohomology sequence reads $$\begin{array}{cc}\hfill 0& \to H^0(B,𝒪_B)\to H^0(B,𝒪_B(C))\to H^0(𝒪_C(C))\hfill \\ & \to H^1(B,𝒪_B)\to H^1(B,𝒪_B(C))\to H^1(𝒪_C(C))\hfill \\ & \to H^2(B,𝒪_B)\to H^2(B,𝒪_B(C))\to H^2(𝒪_C(C))\to \mathrm{}\hfill \end{array}$$ Here the vanishing of the last term shown and $`p_g(B)=0`$ show together that $`h^2(𝒪_B(C))=0`$. Now we have to consider the superabundance $`s=h^1(B,𝒪(C))`$ for the cases of $`B=B_2`$ being a del Pezzo surface $`dP_k`$ ($`𝐏^2`$ blown-up in $`k`$ points) or a Hirzebruch surface $`F_n`$ (a $`𝐏^1`$ bundle over $`𝐏^1`$). The long exact sequence shows with $`q(B)=p_g(B)=0`$ that $`h^1(B,𝒪_B(C))=h^1(𝒪_C(C))=h^0(C,K_CN_{C/B}^{-1})`$, which vanishes certainly for $`0>\mathrm{deg}(K_CN_{C/B}^{-1})=C(K_B+C)-C^2=CK`$; this is guaranteed if we assume $`-K`$ ample; thus for $`B_2=F_n`$ with $`n=0,1`$ and $`dP_k`$ with $`k\le 8`$ the superabundance vanishes. In general we have the important consequence $$Cc_1(B_2)>0\Rightarrow s=0$$ so that we find under this assumption $$Cc_1(B_2)>0\Rightarrow \mathrm{def}_B(C)=h^0(C,N_{B_2}C)=\frac{Cc_1+C^2}{2}.$$ Let us now discuss the case $`B_2=F_2`$. Recall that for $`F_n`$ the Kaehler cone (the very ample classes) equals the positive (ample) classes and is given by the numerically effective classes $`xb+yf=(x,y)`$ with $`x>0,y>nx`$, where $`b=b_{-}`$ is the section with $`b^2=-n`$ and $`f`$ the fiber. Furthermore note that an irreducible non-singular curve exists in a class $`xb+yf`$ exactly if the class lies in the mentioned cone or is the element $`b=(1,0)`$ of negative self-intersection or one of the elements $`f=(0,1)`$ or $`kb_+`$ (with $`k>0`$ and $`b_+=b_{-}+nf=(1,n)`$ of $`b_+^2=+n`$) on the boundary of the mentioned cone; all of these classes together with their positive linear combinations span the effective cone ($`x,y\ge 0`$). Note that $`c_1=2b+(n+2)f`$ is positive only for $`F_0`$ and $`F_1`$, whereas for $`F_2`$ it lies only on the boundary of the positive cone. For $`F_2`$ the Kodaira vanishing theorem tells us that $`h^1(B,𝒪(C))=h^1(B,𝒪(K-C))=0`$ if $`-K+C=(x+2,y+4)`$ is ample, i.e. for $`y>2x,x>-2`$; so clearly the superabundance will vanish for all ample $`C=(x,y)`$ (where even $`x>0`$) and for $`f`$. Note that for $`f`$ we see directly that $`\mathrm{def}_{F_n}f=1=e(f)/2+f^2`$ (the expression from (5.1)) and similarly for $`b`$ with $`n\ne 0`$ that $`\mathrm{def}_{F_n}b=0`$ and $`e(b)/2+b^2=1-n\le 0`$; so for the case $`B=F_2`$ the formal expression which should give the number of actual deformations has to be modified, and - as this case is in a further respect somewhat exceptional, as we will see in the next subsection - we will exclude it. Finally for $`kb_+`$ one sees that $`kb_+c_1(F_2)=4k>0`$, making, according to the argument given above, $`s`$ again vanish in this case, although $`c_1(F_2)`$ is only numerically effective and not ample.

5.2. The deformations of $`C`$ in the elliptic surface $`ℰ`$

From the Kodaira formula for the canonical bundle of $`ℰ`$ one has an expression for $`K_ℰ`$ as a pull-back class (of course also $`F=\pi _ℰ^*p`$ with $`p\in C`$) $$K_ℰ=\pi _ℰ^*K_C+\chi (ℰ,𝒪_ℰ)F$$ Then $`\chi (ℰ,𝒪_ℰ)`$ is via Noether evaluated as $`\frac{1}{12}e(ℰ)`$, as $`c_1(ℰ)`$ has, being a certain number of fibers, vanishing self-intersection.
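Before evaluating $`e(ℰ)`$, here is a quick symbolic cross-check (a sketch in our own notation, not from the paper) of the $`F_n`$ intersection numbers used in 5.1, together with the degree $`d=c_1C`$ that, by the formulas of this subsection, will fix $`e(ℰ)=12c_1C`$:

```python
# Sketch: the 'formal' deformation count (C.c1 + C^2)/2 on F_n for classes
# C = x*b + y*f, using b^2 = -n, b.f = 1, f^2 = 0 (our notation).
import sympy as sp

x, y, n = sp.symbols('x y n')

def dot(u, v):
    """Intersection of u = x1*b + y1*f and v = x2*b + y2*f on F_n."""
    return sp.expand(-n*u[0]*v[0] + u[0]*v[1] + u[1]*v[0])

c1 = (2, n + 2)                       # -K = 2b + (n+2)f
for name, C in (("f", (0, 1)), ("b", (1, 0)), ("x b + y f", (x, y))):
    formal_def = sp.simplify((dot(C, c1) + dot(C, C))/2)
    d = dot(C, c1)
    print(f"{name}:  (C.c1 + C^2)/2 = {formal_def},  d = c1.C = {d}")
# f gives 1 (and d = 2), b gives 1 - n (and d = 2 - n); for n = 2, 1, 0
# one finds d = 0, 1, 2, matching the three examples below
```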
But $`e(ℰ)=12c_1(B_2)C`$, as the elliptic fibration of the Calabi-Yau has the discriminant $`12c_1(B_2)`$, so that $$\chi (ℰ,𝒪_ℰ)=c_1(B_2)C$$ Alternatively one can see from adjunction $$c(ℰ)=c(C)\frac{(1+r)(1+r+2c_1(B_2))(1+r+3c_1(B_2))}{1+3r+6c_1(B_2)}$$ that $$\begin{array}{cc}\hfill c_1(ℰ)& =(e(C)-c_1(B_2)C)F\hfill \\ \hfill e(ℰ)& =12c_1(B_2)C\hfill \end{array}$$ Note that the number $`d:=c_1(B)C`$ has the following important interpretation: from (5.1),(5.1) one has $`C_ℰc_1(ℰ)=e_C-c_1(B)C_B`$ (as $`c_1(ℰ)`$ is a pull-back class, i.e. a number of fibers), so that with adjunction $`e_C=-C_ℰ^2+C_ℰc_1(ℰ)`$ inside $`ℰ`$ one gets $$C_ℰ^2=-c_1(B)C_B$$ This gives after (5.1) another important criterion $$Cc_1(B_2)>0\Rightarrow def_ℰC_ℰ=0$$ So except for the case $`B=F_2,C=b=b_{-}`$ we have no further deformations in the vertical direction; in the mentioned exceptional case one has $`ℰ=b\times F`$, showing the obvious deformation. Note that in this case we had from the expression (5.1) that $`e(b)/2+b^2=1-2=-1`$. As in this exceptional case one has one deformation in $`Z`$ and no deformation in the base $`B`$ which the $`F`$-theory side could see directly, we will exclude it from our final arguments.

Examples

To give some examples note that the first three cases $`d=0,1,2`$, leading to $`e(ℰ)=0,12,24`$, correspond to $`b\times F,dP_9,K3`$. They occur in well-known circumstances if we choose $`B_2=F_n`$ with $`n=2,1,0`$. In that case the mentioned $`ℰ`$ occurs over the base $`C=b=b_{-}`$ of self-intersection $`-n`$ of the Hirzebruch surface: the first case actually gives one of the two exceptional divisors (the other is the base $`F_2`$ itself) of the $`STU`$ Calabi-Yau $`𝐏_{1,1,2,8,12}(24)`$, which consists of a ruled surface over the elliptic fiber (this is just another way to consider the product $`b\times F`$ as fibered) ,; the second case gives the $`dP_9`$ studied for example in , which occurs not just in this set-up but over any exceptional curve ($`b^2=-1,bc_1(B_2)=1`$; rationality and the second property imply the first) in a del Pezzo base; finally for $`F_0`$ one gets the well-known $`K3`$ fibers, which occur of course more generally also over each fiber of any Hirzebruch base (adjunction shows $`d=2`$).

5.3. The deformations of $`C`$ in $`Z`$

The total deformation space can be considered as fibered together out of the pieces investigated so far (cf. ). Concerning $`def_ZC`$ consider $$0\to N_{C/B}\to N_{C/Z}\to N_{B/Z}\otimes 𝒪_C\to 0$$ the last term being again $`N_{C/ℰ}`$. The statement that $`H^0(N_{B/Z}\otimes 𝒪_C)=0`$, needed in order that $`def_CB=def_CZ`$, corresponds exactly with the already formulated condition that there are no further deformations of $`C`$ in $`ℰ`$ if $`C`$ has there (!) negative self-intersection: for then $`degN_{C/ℰ}=C_ℰ^2<0`$ and $`H^0`$ vanishes just as in the argument for $`s=0`$ given above.

A remark about skew curves

Although we will not treat this most general case (where one would have to 'connect' blow-ups and three-branes on the $`F`$-theory side), let us at least point to the possibility that the total physical moduli space will include even deformations of $`C+a_fF`$ away from the reducibility into base (horizontal) and fiber (vertical) curves, leading to curves lying 'skew' to the elliptic fibration. Until now we have considered two possible cases, where a heterotic five-brane wraps a curve either of class $`a_fF`$, i.e. of the purely vertical type - a reducible sum of $`a_f`$ fibers - or of class $`W_B`$, the purely horizontal type wrapping a base curve $`C`$.
Of course the choice of a base curve realizing the cohomology class $`W_B`$ already involves considerable freedom in itself; to give an example, for $`B_2=𝐏^2`$ and $`W_B=2L`$ one can choose either a smooth conic, giving a rational curve $`C`$ of multiplicity $`1`$, or two crossing lines, or even two times the same line, giving a rational curve $`C`$ of multiplicity $`2`$ (cf. also ). As concerns the distribution of the vertical class $`a_fF`$ among a set of curves realizing it, one has considerably less freedom, as one can only make the in general disjoint $`a_f`$ fibers coincide in some subsets, corresponding to a decomposition of the number $`a_f`$. So the set-up up to now would lead one in general to consider the wrapping of the reducible curve $`C+a_fF`$. Of course this realizes only a sublocus of very degenerate possibilities for a curve of cohomology class $`C+a_fF`$, as it may be actually possible to find an irreducible curve (of this cohomology class) inside $`Z`$, or more precisely, as we can assume, actually in the elliptic surface $`ℰ`$ over $`C`$ (cf. also ). This is by no means trivial, as for example in the case that the base curve $`C`$ is rational and that $`h^{2,0}(ℰ)=0`$ resp. $`1`$, i.e. $`ℰ`$ is either a rational elliptic ($`dP_9`$) surface or a $`K3`$. In that case the class $`D:=C+a_fF`$ is not representable by an irreducible smooth curve because, although it is a section of the elliptic fibration of $`ℰ`$ and so rational, adjunction shows that $`-2=D_ℰ^2-Dc_1(ℰ)`$; but in the $`dP_9`$ case $`c_1(ℰ)=F`$ and so the right hand side is $`-1+2a_f-1`$, i.e. $`a_f=0`$, whereas for $`K3`$ it is just $`-2+2a_f`$, showing again $`a_f=0`$. Here we used the fact that $`C_ℰ^2=-1,-2`$ on $`dP_9,K3`$, as one sees again from adjunction. Note as a final remark that one has quite a remarkable curve counting. On $`dP_9`$ one has for the number $`n_{1,\beta }`$ of rational curves in the class $`C+\beta F`$, as counted by the Gromov-Witten invariants and evaluated by mirror symmetry , ($`q=e^{2\pi i\tau }`$; $`\theta _{E_8}(\tau )`$, the theta function of the $`E_8`$ lattice, being given by $`\frac{1}{2}\sum _{k=even}\theta _k^8(\tau )=E_4(\tau )`$ with the Jacobi theta functions $`\theta _k`$ and the Eisenstein series of weight $`4`$; $`\eta `$ is the Dedekind eta-function) $$\sum _{\beta }n_{1,\beta }q^\beta =q^{\frac{1}{2}}\frac{\theta _{E_8}(\tau )}{\eta ^{12}(\tau )}$$ Corresponding results for $`\alpha C+\beta F`$ with $`\alpha >1`$ are given in . Similarly in the $`K3`$ fiber $`𝐏_{1,1,4,6}(12)`$ (of the $`STU`$ Calabi-Yau of degree $`24`$) occurring over fibers of $`F_2`$ one has $$n_{\alpha ,\beta }=c_{STU}(\alpha \beta )$$ where $$\sum _{n\ge -1}c_{STU}(n)q^n=\frac{E_4E_6}{\eta ^{24}}(\tau )$$ Note that these invariants can really be numbers of curves or, more generally, Euler numbers of the deformation space of (or of the moduli space of flat $`U(1)`$ bundles over) the curve (in general one has to consider a certain obstruction bundle). In particular they don't tell us whether there is really an irreducible member besides that curve realization of the class $`\alpha C+\beta F`$ which is just given by the reducible curve consisting (for $`\alpha =1`$, say) of the base curve $`C`$ and a number $`\beta `$ of fibers.
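As a small illustration of the $`dP_9`$ counting formula (a sketch in our own notation, checking only the first coefficients), one can expand $`q^{1/2}\theta _{E_8}/\eta ^{12}=E_4(q)\prod _n(1-q^n)^{-12}`$, using the identity $`\theta _{E_8}=E_4`$ quoted above:

```python
# Sketch: first coefficients of sum_beta n_{1,beta} q^beta
#        = q^{1/2} theta_E8 / eta^12 = E4(q) * prod_{n>=1} (1 - q^n)^{-12}
import sympy as sp

q = sp.symbols('q')
N = 5
E4 = 1 + 240*sum(sp.divisor_sigma(k, 3)*q**k for k in range(1, N))
etainv12 = sp.prod([(1 - q**k)**(-12) for k in range(1, N)])
ser = sp.series(sp.expand(E4)*etainv12, q, 0, N).removeO()
print([sp.expand(ser).coeff(q, k) for k in range(N)])
# -> starts [1, 252, 5130, ...]: one rational curve in the class C itself,
#    252 in C + F, and so on
```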
Further, they don't tell us whether a correspondingly big deformation space is really realized in the elliptic surface $`ℰ`$, let alone in the Calabi-Yau three-fold; even if an irreducible skew realization exists, this does not necessarily mean that this realization is connected (in the sense of going smoothly through supersymmetric realizations) to the reducible realization.

6. The singular case III: computation

By now we are prepared with the necessary background to handle the $`\eta `$-shift and see how the relations for the Hodge numbers of the Calabi-Yau four-fold in terms of the heterotic data will change. To keep things as clear as possible we will separate the changes on the heterotic side from those appearing in the Calabi-Yau four-fold.

Heterotic Side

On the route of our second strategy we found up to now two important things: the shift in the index and the expression for the deformations (the unspecified $`c_1`$ refers to $`B`$) $$\begin{array}{cc}\hfill \mathrm{\Delta }I& =-Cc_1\hfill \\ \hfill def_ZC& =\frac{C^2+Cc_1}{2}\hfill \end{array}$$ (Concerning the first relation note that this is the change in the functional form of $`I`$ as a function of $`\eta _1`$, i.e. for the argument to be given we are not interested in the change coming from just applying the usual index expression to the new, shifted $`\eta _1`$. Concerning the second relation note that, as indicated earlier, we exclude the exceptional case $`B=F_2,C=b_{-}`$.)

F-Theory Side

On the F-theory side the heterotic $`\eta `$-shift is understood as the blowing up of the corresponding curve $`C`$ in the base of the Calabi-Yau four-fold, which leads to a new divisor and so to a new Kaehler class, thus $$h^{1,1}(X)\to h^{1,1}(X)+1$$ Also, this blow-up leads to $$g_C=\frac{C^2-Cc_1}{2}+1$$ new 3-cycles in $`h^{2,1}(B_3)`$ and also in $`h^{2,1}(X)`$, so $$h^{2,1}(X)\to h^{2,1}(X)+g_C.$$ Finally the change in $`h^{3,1}(X)`$ accounts for both effects, the index-shift $`\mathrm{\Delta }I=-Cc_1`$ (as the index of the bundle moduli maps to the deformations of the fourfold, cf. (4.1)) and the occurrence of the $`C`$-deformations (which become geometrical on the $`F`$-theory side) $$h^{3,1}(X)\to h^{3,1}(X)-Cc_1+def_BC$$ From the relation (4.1) for the number of three-branes we learn that the three changes in the Hodge numbers cancel out, since (cf. (5.1)) $$1-g_C-Cc_1+def_BC=0$$ Thus, by expressing everything in data of the common base $`B_2`$, just as it was done in , we find that the desired matching $`n_3=a_f`$ of the number of heterotic five-branes wrapping elliptic fibers with the $`F`$-theory three-branes still holds in the more general situation with a non-trivial $`W_B`$, as the $`\eta `$-shift reduces this completely to the old argument in the simpler situation of $`W_B=0`$. Let us finally make a remark on the exceptional case. For this let us compute the prediction for $`h^{3,1}(X)`$ from the heterotic side in the case where we have an $`E_8\times E_8`$ bundle leaving no unbroken gauge group (and so having a smooth $`X`$ on the $`F`$ side), for $`B_2=F_2`$ and the five-brane wrapped on the base $`b`$ of $`F_2`$. Let us assume that $`t=0`$, which means we have $`B_3=𝐏^1\times F_2`$ on the $`F`$-side. Since $`t=0`$ we can perform the $`W_B`$ shift either in the first $`E_8`$ or in the second one. The index computation gives for the unshifted $`E_8`$ bundle $`I=1336`$ and for the shifted bundle $`1216`$; therefore one has $`h^{3,1}(X)=h^{2,1}(Z)+I+1+def_BC=243+1336+1216+1-1=2795`$.
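Since the whole argument is bookkeeping in intersection numbers, it can again be checked mechanically. The sketch below (our own notation; not part of the paper) verifies the $`W_B=0`$ identity $`n_3=a_f`$ of section 4, the cancellation just used, and the $`F_2`$ numbers quoted above:

```python
# Sketch (our symbols): (i) n_3 = a_f at W_B = 0, (ii) the cancellation
# 1 - g_C - C.c1 + def_B(C) = 0, (iii) the F_2 example numbers.
import sympy as sp

# (i) with I = rk - (48 + 28 c1^2) + 4 a_f and r = 8 + 8 - rk
rk, af, no, c1sq = sp.symbols('rk a_f n_o c1sq')
I   = rk - (48 + 28*c1sq) + 4*af
h11 = 12 - c1sq + (16 - rk)
h21 = no
h31 = 12 + 29*c1sq + I + no
n3  = 2 + sp.Rational(1, 4)*(h11 - h21 + h31)
print(sp.simplify(n3 - af))                # -> 0

# (ii) the three Hodge-number changes cancel
C2, Cc1 = sp.symbols('C2 Cc1')
g_C, defC = (C2 - Cc1)/2 + 1, (C2 + Cc1)/2
print(sp.simplify(1 - g_C - Cc1 + defC))   # -> 0

# (iii) F_2 example: t = 0, W_B = b with b^2 = -2, b.c1 = 0, c1^2 = 8
def I_E8(eta2, etac1, c1c1):               # I = 8 - 4 omega + eta.c1
    return 8 - 4*(-15*eta2 + 135*etac1 - 310*c1c1) + etac1
c1c1 = 8
I_uns = I_E8(36*c1c1, 6*c1c1, c1c1)        # eta = 6 c1
I_shf = I_E8(36*c1c1 - 2, 6*c1c1, c1c1)    # eta = 6 c1 - b
h21Z  = 11 + 29*c1c1                       # = 243
print(I_uns, I_shf, h21Z + I_uns + I_shf + 1 + (-2 + 0)//2)
# -> 1336 1216 2795
```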
Now an independent computation (using toric geometry) of $`h^{3,1}(X)`$ matches precisely the heterotic prediction. Note that we have used here the 'formal' expression for $`def_BC`$, which gives $`-1`$ (so to speak an 'obstruction'), whereas the actual number of deformations of $`b`$ is $`0`$; further, in $`Z`$ the number is even $`+1`$. This shows that this last heterotic deformation in $`ℰ`$ is, as one expects, not directly visible on the $`F`$-theory side, and that, on the other hand, concerning the deformations in the common visible $`B=B_2`$, the $`F`$-theory side 'sees' the 'obstruction' too, as the $`-1`$ is reflected there. We would like to thank A. Klemm, D. Morrison and J. Wahl for discussions.

References

R. Friedman, J. Morgan and E. Witten, "Vector Bundles and F-Theory," Commun. Math. Phys. 187 (1997) 679, hep-th/9701162.
B. Andreas and G. Curio, "Three-Branes and Five-Branes in N=1 Dual String Pairs," Phys. Lett. B417 (1998) 41, hep-th/9706093.
P. Berglund and P. Mayr, "Heterotic/F-Theory Duality from Mirror Symmetry", Adv. Theor. Math. Phys. 2 (1999) 1307, hep-th/9811217.
G. Rajesh, "Toric Geometry and F-theory/Heterotic Duality in Four Dimensions", JHEP 12 (1998) 18, hep-th/9811240.
D.-E. Diaconescu and G. Rajesh, "Geometrical Aspects of Fivebranes in Heterotic/F-Theory Duality in Four Dimensions", hep-th/9903104.
R. Donagi, A. Lukas, B. A. Ovrut and D. Waldram, "Non-Perturbative Vacua and Particle Physics in M-Theory", JHEP 9905 (1999) 018, hep-th/9811168.
R. Donagi, A. Lukas, B. A. Ovrut and D. Waldram, "Holomorphic Vector Bundles and Non-Perturbative Vacua in M-Theory", hep-th/9901009.
R. Donagi, B. A. Ovrut and D. Waldram, "Moduli Spaces of Fivebranes on Elliptic Calabi-Yau Threefolds", hep-th/9904054.
B. Andreas and G. Curio, "On Discrete Twist and Four Flux in N=1 Heterotic/F-Theory Compactifications", hep-th/9908193.
E. Witten, "On Flux Quantization in M-Theory and the Effective Action," J. Geom. Phys. 22 (1997) 1, hep-th/9609122.
K. Dasgupta and S. Mukhi, "A Note on Low Dimensional String Compactifications," Phys. Lett. B398 (1997) 285, hep-th/9612188.
G. Curio and R. Donagi, "Moduli in N=1 Heterotic/F-Theory Duality," Nucl. Phys. B518 (1998) 603, hep-th/9801057.
P. Berglund and P. Mayr, "Stability of Vector Bundles from F-Theory", hep-th/9904114.
S. Sethi, C. Vafa and E. Witten, "Constraints on Low Dimensional String Compactifications," Nucl. Phys. B480 (1996) 213, hep-th/9606122.
S. Hosono, A. Klemm, S. Theisen and S.-T. Yau, "Mirror Symmetry, Mirror Map and Applications to Complete Intersection Calabi-Yau Spaces", Nucl. Phys. B433 (1995) 501, hep-th/9406055.
B. Andreas, G. Curio and D. Lüst, "N=1 Dual String Pairs and their Massless Spectra", Nucl. Phys. B507 (1997) 175, hep-th/9705174.
A. Klemm, B. Lian, S.-S. Roan and S.-T. Yau, "Calabi-Yau fourfolds for M and F-theory compactifications", Nucl. Phys. B518 (1998) 515, hep-th/9701023.
R. Hartshorne, "Algebraic Geometry", Springer 1977.
S. Kachru and C. Vafa, "Exact Results for N=2 Compactifications of Heterotic Strings", Nucl. Phys. B450 (1995) 69, hep-th/9505105.
S. Katz, P. Mayr and C. Vafa, "Mirror symmetry and exact solution of 4D N=2 Gauge theories I", hep-th/9706110.
J.A. Minahan, D. Nemeschansky, C. Vafa and N.P. Warner, "E-Strings and N=4 Topological Yang-Mills Theories", Nucl. Phys. B527 (1998) 581, hep-th/9802168.
A. Klemm, private communication.
# Pulsar Scintillation Studies and Structure of the Local Interstellar Medium ## 1. Introduction Propagation effects on radio signals from pulsars, such as dispersion and scattering, are very useful in probing the distribution of thermal plasma in the ISM. Studies of Interstellar Scattering (ISS) can be used to understand the distribution and the spectrum of plasma density fluctuations. Observational data from a variety of wavebands (ranging from X-ray to optical) suggest that the Solar system resides in a low-density, X-ray emitting cavity of size $`\sim `$ a few 100 pc (Cox & Reynolds 1987; Snowden et al. 1990). It is reasonable to expect the Local Bubble and its environment to play a substantial role in determining the scintillation properties of nearby ($`\lesssim `$1 kpc) pulsars. Such pulsars, therefore, form potential tools for studying the structure and properties of the LISM. Scintillation properties of pulsars are studied using their dynamic scintillation spectra, i.e., records of intensity variations in the frequency-time plane. Such spectra display intensity patterns that fade over short time intervals and narrow frequency ranges. The average characteristics of scintillation patterns are quantified using the two-dimensional (2D) auto co-variance function (ACF). The ACF is fitted with a 2D elliptical Gaussian to yield the parameters, viz., the decorrelation bandwidth ($`\nu _d`$), the scintillation timescale ($`\tau _d`$) and the drift slope ($`dt/d\nu `$). From the present observations, it has been possible to estimate the scintillation properties and ISM parameters with accuracies much better than what has been possible from most earlier data. ## 2. Modeling the Structure of the Local ISM The behaviour of the line-of-sight-averaged strength of scattering, $`\overline{\mathrm{C}_\mathrm{n}^2}`$, with direction ($`l,b`$) and location (dispersion measure (DM) or distance (D)) forms a powerful means of investigating the nature of the distribution of plasma density fluctuations in the ISM (Cordes et al. 1991). From our measurements of $`\nu _d`$, we have obtained very precise estimates of $`\overline{\mathrm{C}_\mathrm{n}^2}`$. There are fluctuations of about two orders of magnitude in $`\overline{\mathrm{C}_\mathrm{n}^2}`$, much larger than predicted by the current models for the $`\mathrm{C}_\mathrm{n}^2`$ distribution in the Galaxy (Cordes et al. 1991). In addition, there is a systematic variation of $`\overline{\mathrm{C}_\mathrm{n}^2}`$ with distance, whereby there is a turnover near $`\sim `$ 200 pc, followed by a downward trend up to $`\sim `$ 1 kpc. Our analysis also shows that many of the nearby pulsars are more strongly scattered than expected. Further, there are several cases of pulsars with comparable DMs and/or at similar distances showing remarkably different scintillation properties. The strength of such anomalous scattering ($`A_{dm}`$) shows a systematic behaviour with DM and distance. These results suggest that the distribution of scattering material in the LISM is non-uniform, and that it is likely to be in the form of a large-scale coherent density structure. The results are modeled in terms of large-scale spatial inhomogeneities in the distribution of radio-wave scattering material in the LISM. To explain the observations, we need a 3-component model, where the Solar system resides in a weakly scattering medium, surrounded by a shell of enhanced scattering, embedded in the normal, large-scale ISM (Bhat et al. 1998).
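To make the geometry of this 3-component picture concrete, here is a minimal sketch (not from the paper; the constants are round placeholder numbers of the same order as the fitted ranges quoted in the next section, and the function name is ours) of how a low-density cavity plus a thin scattering shell produces a turnover of $`\overline{\mathrm{C}_\mathrm{n}^2}`$ with distance:

```python
# Sketch: line-of-sight-averaged scattering strength through a 3-component
# LISM (cavity interior + shell + ambient medium). Placeholder constants,
# not the fitted model parameters.
import numpy as np

CN2_IN, CN2_AMB = 10**-4.5, 10**-3.5   # m^(-20/3), interior / ambient
SHELL_INTEGRAL  = 10**-0.75            # pc m^(-20/3), integral across shell
R_BUBBLE        = 100.0                # pc, cavity radius along this ray

def mean_cn2(D):
    """<C_n^2> toward a pulsar at distance D (pc), on a ray that leaves
    the cavity at R_BUBBLE and crosses the shell once."""
    if D <= R_BUBBLE:
        return CN2_IN
    path = CN2_IN*R_BUBBLE + SHELL_INTEGRAL + CN2_AMB*(D - R_BUBBLE)
    return path / D

for D in (50, 150, 300, 1000):
    print(f"D = {D:5d} pc   <C_n^2> = {mean_cn2(D):.2e} m^-20/3")
# the shell makes <C_n^2> jump just beyond the cavity wall and then decline
# with distance, the qualitative trend described above
```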
The best-fit values of the parameters of the local scattering structure are such that the observed trends of $`A_{dm}`$ and $`\nu _d`$ are reproduced. It has an ellipsoidal morphology, and is more extended away from the Galactic plane (Figure 1). The centre has an offset of $`\sim `$ 20-35 pc from the Sun, towards $`215^o<l<240^o`$ and $`-20^o<b<20^o`$. The strength of plasma density fluctuations in the shell material ($`10^{-0.96}<\int _0^dC_n^2(z)𝑑z<10^{-0.55}\mathrm{pc}\mathrm{m}^{-20/3}`$, where $`d`$ is the thickness of the shell) is much larger than that in the interior ($`10^{-4.70}<\overline{C_n^2}<10^{-4.22}\mathrm{m}^{-20/3}`$) and that in the ambient medium ($`\overline{C_n^2}<10^{-3.30}\mathrm{m}^{-20/3}`$). A detailed comparison with other pertinent studies of the LISM shows that the morphology of the inferred scattering structure is very similar to that of the Local Bubble, as known from observational data at X-ray, EUV, UV and optical bands. ## 3. Further Extensions of the Local ISM Model Observational data in the recent past have considerably improved our understanding of the structure of the LISM (cf. Breitschwerdt et al. 1998). In particular, there has been clear evidence for interaction between the Local Bubble and the nearby Loop I Superbubble (Egger 1998). The Loop I Bubble, with a size of $`\sim `$ 300 pc and located at $`\sim `$ 170 pc towards $`l\sim 330^o`$, $`b\sim 20^o`$ (Sco-Cen OB association), covers a large region of the sky (angular diameter $`120^o`$), and is hence likely to play a substantial role in the dispersion and scintillation of nearby pulsars. In order to examine this, we carried out a critical comparison of recent scintillation measurements for pulsars whose lines of sight intersect the Loop I Bubble with their predictions from the Local Bubble model. The scintillation data used in our analysis are from Johnston et al. (1998), Bhat et al. (1999) and Cordes (1986). Our analysis reveals large discrepancies between the measurements and predictions, whereby the level of enhanced scattering is much higher than that suggested by the Local Bubble model (Figure 2). The proposed LISM model is further extended by modeling the distribution of turbulent plasma associated with the Loop I Bubble. Scintillation data for the pulsars PSR J1744$`-`$1134 and PSR J1456$`-`$6843 (at parallax distances of 357 pc and 450 pc, respectively) are used to constrain the strength of scattering in the Loop I boundary. PSR J1744$`-`$1134 is a very interesting object in this context. Toscano et al. (1999) report precise estimates of parallax and proper motion for this pulsar. With the new distance estimate ($`357\pm 35`$ pc), the second boundary of Loop I is located mid-way ($`\sim `$ 175 pc) towards this pulsar, suggesting that the enhanced level of scattering reported for this pulsar ($`\nu _d`$ $`\sim `$ 300 kHz, cf. Johnston et al. 1998) is predominantly due to the Loop I shell. Such enhanced scattering could not have been explained if the pulsar were located at 170 pc (indicated by the unfilled star in Figure 2), as suggested by the Taylor & Cordes (1993) model, whereby the pulsar would have to be located in the interior of Loop I. The inferred strength of scattering in the Loop I shell is found to be somewhat larger than that in the Local Bubble shell. The improved LISM model can successfully explain the enhanced scattering of most pulsars within $`\sim `$ 1 kpc (Figure 2). ## References Bhat N.D.R., Gupta Y., Rao A.P. 1998, ApJ, 500, 262 Bhat N.D.R., Rao A.P., Gupta Y. 1999, ApJS, 121, 483 Breitschwerdt D., Freyberg M.J., Trümper J.
1998, Proc. IAU Coll. 166
Cordes J.M. 1986, ApJ, 311, 183
Cordes J.M., Weisberg J.M., Frail D.A., et al. 1991, Nature, 354, 121
Cox D.P., Reynolds R.J. 1987, ARA&A, 25, 303
Egger R. 1998, Lecture Notes in Physics, 506, 287
Johnston S., Nicastro L., Koribalski B. 1998, MNRAS, 297, 108
Snowden S.L., Cox D.P., McCammon D., Sanders W.T. 1990, ApJ, 354, 211
Taylor J.H., Cordes J.M. 1993, ApJ, 411, 674
Toscano M., Britton M.C., Manchester R.N., et al. 1999, ApJ, 523, L171
# The mechanism for the 3×3 distortion of Sn/Ge(111) ## I Introduction The study of the so-called $`\alpha `$-phases (1/3 monolayer adatom coverage) of tetravalent adsorbates on semiconductor surfaces has recently attracted considerable interest, due to the complex and diverse phenomenology displayed by systems that, at first glance, look very similar, both from a structural and from an electronic point of view. On one side we have Pb/Ge(111) and Sn/Ge(111), where a transition from $`\sqrt{3}\times \sqrt{3}`$ to $`3\times 3`$ surface periodicity has been observed below $`200`$ K. On the other side, SiC(0001) and K/Si(111):B retain a $`\sqrt{3}\times \sqrt{3}`$ periodicity at all temperatures, but are insulating, in contrast with simple electron counting rules. All the above systems are characterized, in the $`\sqrt{3}\times \sqrt{3}`$ phase, by a narrow and half-filled surface band arising from the dangling-bond orbital of the adatom. Narrow metallic bands are highly unstable, either against electron-electron instabilities (Mott transition) or against genuine structural distortions aimed at lowering the electronic density of states at the Fermi energy. SiC(0001) and, possibly, also K/Si(111):B appear to belong to the former class, due to the large Coulomb repulsion within the dangling-bond orbital of the Si adatom. In line with this expectation, SiC(0001) has recently been predicted to be a surface magnetic Mott insulator on the basis of first-principles calculations. Sn/Ge(111) and Pb/Ge(111) belong instead to the class of surfaces where a large structural distortion, leading to a $`3\times 3`$ periodicity, removes at least some of the original destabilizing metallic character. Although considerable progress has been made in recent years, a complete understanding of the physics underlying the appearance of such different phenomena (Mott transition or atomic distortion) at the surfaces of otherwise very similar systems is presently missing. In particular, we lack a microscopic understanding of the reasons that make a system decide for one or the other state. This is particularly relevant also in connection with the possibility that these states may be observed at the surfaces of other, presently unexplored, $`\alpha `$-phases such as Sn/Si(111) and Pb/Si(111). Recent work on Sn/Si(111) fails to indicate any sign of a transition down to 100 K. Here we focus on Sn/Ge(111), as a prototype of the systems where a large atomic distortion takes place. We investigate the system with first-principles methods, and find that both an undistorted (i.e. structurally $`\sqrt{3}\times \sqrt{3}`$) but magnetic state, and a $`3\times 3`$ structurally distorted state, lower the energy of the originally metallic $`\sqrt{3}\times \sqrt{3}`$ surface. However, the energy gain is larger for the structurally distorted case, explaining the observed low-temperature $`3\times 3`$ structure. Moreover, by examining in detail the atomic and electronic structure of the $`3\times 3`$ distortion, we are able to highlight the microscopic mechanisms that drive the transition. ## II Method The Sn/Ge(111) surface has been modelled in a repeated supercell geometry where three-bilayer slabs of Ge atoms are separated by equivalently thick vacuum regions. Sn adatoms are placed in the T<sub>4</sub> position of the upper surfaces, while the dangling bonds of the lower surfaces are saturated by hydrogen atoms.
We performed extensive electronic structure calculations for both the $`\sqrt{3}\times \sqrt{3}`$ and the $`3\times 3`$ surfaces, either in the local (spin) density approximation (LSDA) or including gradient corrections (GC) to the energy functional. Norm-conserving pseudopotentials in the Kleinman-Bylander form, a plane-wave basis set with a 12 Ry energy cutoff, and a $`15\times 15`$ k-point grid to sample the full surface Brillouin zone (SBZ) of the $`\sqrt{3}\times \sqrt{3}`$ phase were used. For the $`3\times 3`$ surface an equivalent SBZ sampling was employed. All systems were structurally relaxed, keeping the bottom Ge layer and the saturating hydrogen atoms fixed, until the Hellmann-Feynman forces on all other atoms were reduced to less than $`10^{-2}`$ eV/a.u. ## III Results The GC band structure of the unreconstructed Sn/Ge(111) $`\sqrt{3}\times \sqrt{3}`$ surface is reported in the left panel of Fig. 1. The system is metallic, with a single predominant, partially occupied surface band, originating from the adatom dangling bonds, in the bulk projected energy gap. Due to the rather small bandwidth, $`w\sim 0.6`$ eV, the system presents an instability toward a spin- and charge-density wave of $`3\times 3`$ periodicity. The corresponding ground-state band structure is shown in the middle panel of Fig. 1. In agreement with experimental evidence, the magnetically ordered structure is still metallic, although with a reduced density of states at the Fermi energy. A similar result was obtained earlier for the Si/Si(111) surface, where, however, the exchange gap was larger and the resulting structure was insulating. The inclusion in the calculation of GC terms to the energy functional turns out to be necessary to stabilize the magnetic structure. This is not surprising, as we expect the underlying physics to be rooted in the same aspects (small band width, large Coulomb self-interaction) that produce the Mott insulating states in related systems. In addition to this magnetic instability, the metallic Sn/Ge(111) $`\sqrt{3}\times \sqrt{3}`$ surface is also unstable against a purely structural distortion, where vertical displacements of the adatoms with $`3\times 3`$ periodicity are accompanied by a bond alternation of the underlying substrate. In Fig. 2 the atomic structures of the unreconstructed (left) and reconstructed (right) surface are compared. One of the 3 adatoms rises above the surface while the other two sink deeper into the substrate, the final calculated vertical displacement between the two adatom types being as large as 0.36 Å. The energy gain of the system upon reconstruction is 9 meV/adatom, to be compared with 1-2 meV/adatom in the magnetic case. The reconstruction is robust with respect to the exchange-correlation scheme used, and LDA and GC give here essentially the same results, indicating that the physics involved is probably well described by a conventional band picture, in spite of the large value of U/W in the surface state band. This point will be addressed later. Our calculated distortion compares favorably with that extracted from very recent x-ray diffraction and also with a previous LDA calculation. The reconstruction develops with no energy barrier: we checked that the energy decreases steadily when the adatom vertical offset is fixed at about 10 % of its final value, the substrate atoms being allowed to relax from the unreconstructed positions.
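Schematically, such constrained relaxations amount to a simple force loop; the sketch below is ours (a generic steepest-descent step on a toy quadratic energy standing in for the DFT Hellmann-Feynman forces, with the $`10^{-2}`$ eV/a.u. threshold quoted above), not the actual plane-wave code:

```python
# Schematic constrained-relaxation loop: frozen coordinates (bottom Ge
# layer, saturating H, or a fixed adatom offset) are masked out, and the
# loop stops once max|F| < 1e-2 eV/a.u. Toy forces are placeholders.
import numpy as np

F_TOL, STEP = 1e-2, 0.1

def relax(pos, forces_fn, frozen_mask, max_iter=1000):
    pos = pos.copy()
    for it in range(max_iter):
        F = forces_fn(pos)
        F[frozen_mask] = 0.0
        fmax = np.abs(F).max()
        if fmax < F_TOL:
            return pos, it, fmax
        pos = pos + STEP * F          # damped steepest-descent step
    return pos, max_iter, fmax

toy_forces = lambda p: -2.0*(p - 1.0)     # quadratic well around p = 1
frozen = np.zeros((5, 3), dtype=bool)
frozen[0] = True                          # e.g. the bottom layer
pos, nit, fmax = relax(np.zeros((5, 3)), toy_forces, frozen)
print(f"converged in {nit} steps, max|F| = {fmax:.1e}")
```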
The role of the substrate relaxation is important: if only the adatom positions are optimized, then the energy gain disappears, and the unreconstructed surface is more stable. In fact, it can be seen from Fig. 2 that the reconstruction pattern involves changes in the (111) bond lengths between the second and third Ge atomic layers. The reconstruction apparently occurs because the symmetry lowering allows some rehybridization between adatoms, accompanied by a bond alternation in the substrate that stabilizes the deformation. The mechanism can be better understood by examining the nature of the Wannier function (WF) of the mid-gap state of the $`\sqrt{3}\times \sqrt{3}`$ surface (approximated here by the wavefunction at the M point, see Fig. 3). The WF originates from the adatom dangling bond but has important contributions from substrate states. In particular it can be seen as an antibonding combination of the adatom dangling bond and the bonding state located in the (111) Ge-Ge bond below it. When the system reconstructs, one of the three adatoms becomes inequivalent, filling its state completely. By doing so the corresponding (111) Ge-Ge bond is strengthened (due to the bonding character of the WF in that region) while the Sn-Ge one is weakened (the WF is antibonding there); as a result the adatom rises above the surface. The WFs centered on the other two adatoms are partially depopulated and an opposite relaxation occurs. To better illustrate this mechanism let us consider a toy system where the relevant structural motif (a Sn adatom and the four Ge atoms underneath) is extracted from the surface and hydrogen atoms are added to saturate the Ge dangling bonds. The neutral cluster (Fig. 4, left panel) corresponds to the unreconstructed surface (Fig. 2, left panel) and presents a semioccupied highest molecular orbital (HMO) with antibonding character between Sn and Ge. When the HMO is completely filled, making the cluster negatively charged, all Sn-Ge bonds weaken and the corresponding distances increase. Not much else happens; in particular the already strong vertical Ge-H bond is not modified significantly. The effect is more dramatic in the opposite situation, when the HMO is emptied, making the cluster positively charged (Fig. 4, right panel). As expected from its antibonding character, depopulating the HMO strengthens the Sn-Ge bonds, shortening their distances, while the lower Ge-H bond is essentially destroyed. Why does Sn/Ge(111) energetically prefer to distort rather than become Mott-Hubbard insulating? Our results suggest that the $`3\times 3`$ distortion is based on the strong antibonding interaction between the adatom dangling bond and the Ge-Ge bond directly underneath. The fact that the energy gain should come from an alternating hybridization/dehybridization of the surface band with a deeper Ge-Ge bond has two implications. The first is that the large value of U/W inside the surface band is not very relevant to this state, unlike the Mott-Hubbard state. The second is that it shows that band-Jahn-Teller, a terminology often used to describe it, is not a correct characterization of this state. It is rather a bond-density-wave. In this case the energy gain does not come from gap-opening, which is only partial, but from the modulation of the strength of the Ge-Ge bonds under the adatoms. Large adatoms, narrow semiconductor gaps and a deformable lattice favor that. These conditions are not met e.g. in SiC(111), where moreover poor screening enhances the value of the electron repulsion (U).
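The essential logic of the charged-cluster comparison can be caricatured by a two-level model; the sketch below is ours (a generic Hueckel-like toy with made-up energies, not a calculation from the paper): a single dangling-bond level coupled to a single bond level, with the cross bond order tracked as the antibonding combination is filled.

```python
# Toy two-level rendering: the mid-gap state as an antibonding combination
# of the adatom dangling bond and the Ge-Ge bonding state; filling it
# reduces the cross bond order. Illustrative numbers only.
import numpy as np

eps_db, eps_bond, t = 0.0, -1.0, -0.8   # onsites; negative resonance integral
H = np.array([[eps_db, t], [t, eps_bond]])
E, V = np.linalg.eigh(H)                 # column 0: bonding, 1: antibonding

for n_anti in (0.0, 0.5, 1.0):           # filling of the antibonding level
    rho = 2*np.outer(V[:, 0], V[:, 0]) + 2*n_anti*np.outer(V[:, 1], V[:, 1])
    print(f"antibonding filling {n_anti:.1f} -> bond order {rho[0, 1]:+.2f}")
# the printed bond order decreases monotonically: emptying the HMO
# strengthens the link, filling it weakens it, as in the +/- clusters above
```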
In conclusion, we have studied the competition between magnetic and distorted nonmagnetic ground states of Sn/Ge(111). Dominance of the latter has been understood as due to a modulation of the antibonding partnership between the adatom and the underlying Ge-Ge bond. Our calculations were performed on the CINECA Cray-T3E parallel machine in Bologna, using the parallel version of the PWSCF code. Access to the Cray machine has been granted within the Iniziativa Trasversale Calcolo Parallelo of INFM and the initiative Progetti di ricerca di rilevante interesse nazionale of MURST. Sponsorship from INFM/LOTUS and from COFIN97 is also acknowledged.
# Adiabatic Gravitational Perturbation During Reheating ## I Introduction As was realized in \[REFERENCES\], parametric resonance instability occurs during the reheating period when the inflaton field $`\varphi `$ oscillates. Since the gravitational perturbation is coupled to the inflaton by the Einstein equation, it may also experience parametric resonance amplification during this stage. This issue has been studied in Refs. \[REFERENCES\] and \[REFERENCES\], and recently re-examined by Finelli and Brandenberger. The gravitational potential $`\mathrm{\Phi }`$ can be calculated by solving the linearized Einstein equation; however, in the case of the adiabatic perturbation with scales far outside the Hubble radius (the wavenumber $`k\to 0`$), it is convenient to work with the Bardeen parameter, $$\zeta \equiv \frac{2}{3}\frac{\mathrm{\Phi }+H^{-1}\dot{\mathrm{\Phi }}}{1+w}+\mathrm{\Phi }.$$ In Eq.(1) the dot denotes the derivative with respect to time, $`H`$ is the Hubble expansion rate and $`w=p/\rho `$ is the ratio of the pressure to the density of the background. In the limit $`k\to 0`$, the Bardeen parameter satisfies $$\frac{3}{2}(1+w)H\dot{\zeta }=0.$$ During the stage of reheating, Eq. (2) becomes: $$\dot{\varphi }^2\dot{\zeta }=0.$$ Recently Finelli and Brandenberger pointed out that when the inflaton field oscillates, $`\dot{\varphi }=0`$ occurs periodically, so it is possible to have $`\dot{\zeta }\ne 0`$. If this happens, the cosmological perturbation will undergo parametric amplification without violating causality. Specifically, they considered an inflaton model with potential $`V(\varphi )=m^2\varphi ^2/2`$ and solved it numerically. They found that $`\zeta `$ is constant in time. In this paper, we extend their work and consider a general single-field inflaton potential. We examine the evolution of $`\zeta `$ during the reheating stage, and find no changes of $`\zeta `$ in time. To begin with, we derive the equation of motion of the Bardeen parameter $`\zeta `$ in perturbation theory; then we take two specific models as numerical illustrations of our general analytical result. ## II Perturbation theory and analytic argument Working in the longitudinal gauge, the perturbed metric can be expressed in terms of the gravitational potential $`\mathrm{\Phi }`$ $$ds^2=(1+2\mathrm{\Phi })dt^2-a^2(t)(1-2\mathrm{\Phi })dx_idx^i,$$ where $`a(t)`$ is the scale factor. The perturbed Einstein equations give $$\ddot{\mathrm{\Phi }}+3H\dot{\mathrm{\Phi }}+\left[\frac{k^2}{a^2}+2(\dot{H}+H^2)\right]\mathrm{\Phi }=\kappa ^2(\ddot{\varphi }+H\dot{\varphi })\delta \varphi ,$$ $$\ddot{\delta \varphi }+3H\dot{\delta \varphi }+(\frac{k^2}{a^2}+V^{\prime \prime })\delta \varphi =4\dot{\mathrm{\Phi }}\dot{\varphi }-2V^{}\mathrm{\Phi },$$ $$\dot{\mathrm{\Phi }}+H\mathrm{\Phi }=\frac{1}{2}\kappa ^2\dot{\varphi }\delta \varphi ,$$ where $`\kappa ^2=8\pi G`$, $`V`$ is the inflaton potential, $`\delta \varphi `$ is the perturbation to the field $`\varphi `$, and a prime denotes the derivative with respect to $`\varphi `$.
Inserting (7) into (5) one obtains the equation of motion for $`\mathrm{\Phi }`$, $$\ddot{\mathrm{\Phi }}+\left(H-2\frac{\ddot{\varphi }}{\dot{\varphi }}\right)\dot{\mathrm{\Phi }}+\left(\frac{k^2}{a^2}+2\dot{H}-2H\frac{\ddot{\varphi }}{\dot{\varphi }}\right)\mathrm{\Phi }=0.$$ To eliminate the singularities in the equation above when the inflaton field $`\varphi `$ oscillates, one can make use of the Sasaki-Mukhanov variable $$Q\equiv \delta \varphi +\frac{\dot{\varphi }}{H}\mathrm{\Phi },$$ in terms of which Eq.(8) can be re-written as $$\ddot{Q}+3H\dot{Q}+\left[V^{\prime \prime }+\frac{k^2}{a^2}+2\left(\frac{\dot{H}}{H}+3H\right)^{\cdot }\right]Q=0.$$ The Bardeen parameter $`\zeta `$ is related to $`Q`$ by $$\zeta =\frac{H}{\dot{\varphi }}Q.$$ In the expanding universe, the inflaton field $`\varphi `$ satisfies the equation of motion $$\ddot{\varphi }+3H\dot{\varphi }+V^{}=0.$$ Differentiating (12) with respect to $`\alpha \equiv \mathrm{ln}a`$ (note that $`\dot{\alpha }=H`$), we get $$H^{-1}\varphi ^{\cdot \cdot \cdot }+3\ddot{\varphi }+(V^{\prime \prime }+3\dot{H})H^{-1}\dot{\varphi }=0,$$ where $`\varphi ^{\cdot \cdot \cdot }\equiv \frac{d^3\varphi }{dt^3}`$. Since $`\dot{H}=-\kappa ^2\dot{\varphi }^2/2`$ and $`\ddot{H}=-\kappa ^2\dot{\varphi }\ddot{\varphi }`$, we have the following relation $$2H^{-2}\dot{H}\ddot{\varphi }-H^{-2}\ddot{H}\dot{\varphi }=0.$$ Subtracting (14) from (13) and simplifying, we obtain $$\left(\frac{\dot{\varphi }}{H}\right)^{\cdot \cdot }+3H\left(\frac{\dot{\varphi }}{H}\right)^{\cdot }+\left[V^{\prime \prime }+2\left(\frac{\dot{H}}{H}+3H\right)^{\cdot }\right]\frac{\dot{\varphi }}{H}=0.$$ Differentiating $`Q=\left(\frac{\dot{\varphi }}{H}\right)\zeta `$ with respect to time $`t`$, we have $$\dot{Q}=\left(\frac{\dot{\varphi }}{H}\right)^{\cdot }\zeta +\left(\frac{\dot{\varphi }}{H}\right)\dot{\zeta },$$ $$\ddot{Q}=\left(\frac{\dot{\varphi }}{H}\right)^{\cdot \cdot }\zeta +2\left(\frac{\dot{\varphi }}{H}\right)^{\cdot }\dot{\zeta }+\left(\frac{\dot{\varphi }}{H}\right)\ddot{\zeta }.$$ Plugging $`Q`$, $`\dot{Q}`$ and $`\ddot{Q}`$ above into Eq.(10), and making use of Eq.(15), we finally obtain an equation of motion for $`\zeta `$ $$\frac{\dot{\varphi }}{H}\ddot{\zeta }+\left(2\frac{\ddot{\varphi }}{H}-2\frac{\dot{H}}{H^2}\dot{\varphi }+3\dot{\varphi }\right)\dot{\zeta }+\frac{k^2}{a^2}\left(\frac{\dot{\varphi }}{H}\right)\zeta =0.$$ Clearly, the solution of Eq.(18) is such that $`\dot{\zeta }=0`$ when $`\dot{\varphi }=0`$ (note that $`\ddot{\varphi }\ne 0`$ at the time when $`\dot{\varphi }=0`$). On the other hand, for $`\dot{\varphi }=0`$, Eq.(3) gives rise to $`\dot{\zeta }=0`$. We conclude that $`\zeta `$ remains unchanged during reheating. We should point out that the potential $`V`$ in our analytical proof above is not specified, so our result applies to general single-field inflaton models. ## III Numerical examples To illustrate the analytical result of the last section, we take two specific models as examples, directly solving the perturbed Einstein equations numerically. The first model is $`V(\varphi )=\lambda \varphi ^4/4`$; the second one is a massive inflaton with a self-coupled interaction, $`V(\varphi )=m^2\varphi ^2/2+\lambda \varphi ^4/4`$. (The model considered by Finelli and Brandenberger \[REFERENCES\] corresponds to the limit $`\lambda \to 0`$.) In the latter model two parameters are introduced. One is the inflaton mass $`m`$; the other is the self-interaction coupling constant $`\lambda `$.
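The constancy of $`\zeta `$ in the quartic model can be seen with a minimal numerical sketch (ours, with illustrative initial data, $`\kappa ^2=1`$ units, and our own variable names; it is not the computation behind the figures), integrating the background together with the $`k=0`$ system (5)-(7):

```python
# Sketch: background (12) plus the k = 0 equations (5)-(7) for
# V = lambda*phi^4/4, monitoring the Bardeen parameter of Eq. (1).
import numpy as np
from scipy.integrate import solve_ivp

lam = 1e-3
V, Vp, Vpp = (lambda p: 0.25*lam*p**4, lambda p: lam*p**3,
              lambda p: 3*lam*p**2)

def rhs(t, y):
    phi, dphi, Phi, dPhi, dchi, ddchi = y      # dchi = delta phi
    H = np.sqrt((0.5*dphi**2 + V(phi))/3.0)
    ddphi = -3*H*dphi - Vp(phi)
    d2Phi = -3*H*dPhi - 2*(-0.5*dphi**2 + H**2)*Phi + (ddphi + H*dphi)*dchi
    d2chi = -3*H*ddchi - Vpp(phi)*dchi + 4*dPhi*dphi - 2*Vp(phi)*Phi
    return [dphi, ddphi, dPhi, d2Phi, ddchi, d2chi]

# perturbation seeded consistently with the constraint (7), taking dPhi(0)=0
phi0, dphi0, Phi0 = 2.0, -0.01, 1e-5
dchi0 = 2*np.sqrt((0.5*dphi0**2 + V(phi0))/3.0)*Phi0/dphi0
sol = solve_ivp(rhs, (0, 4000), [phi0, dphi0, Phi0, 0.0, dchi0, 0.0],
                rtol=1e-10, atol=1e-14, dense_output=True)

ts = np.linspace(500, 4000, 20001)
phi, dphi, Phi, dPhi, dchi, _ = sol.sol(ts)
H = np.sqrt((0.5*dphi**2 + V(phi))/3.0)
w = (0.5*dphi**2 - V(phi))/(0.5*dphi**2 + V(phi))
zeta = (2.0/3.0)*(Phi + dPhi/H)/(1.0 + w) + Phi
good = np.abs(1.0 + w) > 0.5      # stay away from the zeros of dphi,
                                  # where Eq. (1) is 0/0 numerically
print(zeta[good].min(), zeta[good].max())   # expect nearly equal values:
                                            # zeta frozen during reheating
```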
In our numerical calculations for this model we take $m^{-1}$ as the unit of time and leave $\lambda /m^2$ as a free parameter; as illustrations we choose $\lambda /m^2=1\times 10^{-3}\,m_{pl}^{-2},\ 1\,m_{pl}^{-2},\ 1\times 10^{3}\,m_{pl}^{-2}$ (where $m_{pl}$ is the Planck mass). Fig.1 and Fig.2 present the evolution of $Q$ and $\zeta$ for these two models, from which we can see that $Q$ does not change over a period of time and $\zeta$ does not change during the zero crossings of $\dot{\varphi}$ in the reheating stage (note that Eqs.(2) and (3) hold only for the adiabatic perturbation with wavenumber $k\to 0$, and we take $k=0$ for simplicity in the numerical calculations). ## IV Discussion In this paper we have studied the evolution of perturbations in the inflationary cosmology and found no additional growth of gravitational fluctuations due to the oscillating inflaton field during reheating. Our result is valid for any single-field inflaton potential. For multiple-field models Bassett et al. recently pointed out that there are possibilities of amplification of long wavelength perturbations. This important open issue deserves further study. We thank Robert Brandenberger for discussions. This work was supported in part by the National Natural Science Foundation of China. Figure Captions Fig.1 Evolution of $Q$ and $\zeta$ as a function of time in the model $V(\varphi)=\lambda\varphi^4/4$. The initial condition is chosen as $Q=1$ when $\varphi =0.2m_{pl}$. The solid and dashed lines represent the evolution of $Q$ and $\zeta$ respectively. Time is expressed in units of $(m_{pl}\sqrt{\lambda})^{-1}$. Fig.2 Evolution of $Q$ and $\zeta$ as a function of time for a massive inflaton $V(\varphi)=m^2\varphi^2/2+\lambda\varphi^4/4$. (a), (b), and (c) correspond to $\lambda /m^2=1\times 10^{-3}\,m_{pl}^{-2},\ 1\,m_{pl}^{-2},\ 1\times 10^{3}\,m_{pl}^{-2}$ respectively. Time is in units of $m^{-1}$.
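For completeness, the kind of integration described in Sec. III can be reproduced with a few lines of code. The following is a minimal sketch, not the authors' code: it evolves the background equation (12) together with Eq. (10) at $k=0$ and forms $\zeta$ via Eq. (11); the parameter values are merely illustrative, in units $m_{pl}=1$ with $\kappa^2=8\pi$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Units: m_pl = 1, kappa^2 = 8 pi.  m and lambda are hypothetical values.
kappa2 = 8.0 * np.pi
m, lam = 1.0e-6, 1.0e-14

V   = lambda p: 0.5 * m**2 * p**2 + 0.25 * lam * p**4
dV  = lambda p: m**2 * p + lam * p**3
d2V = lambda p: m**2 + 3.0 * lam * p**2

def rhs(t, y):
    phi, dphi, Q, dQ = y
    H     = np.sqrt(kappa2 / 3.0 * (0.5 * dphi**2 + V(phi)))
    ddphi = -3.0 * H * dphi - dV(phi)                  # Eq. (12)
    Hdot  = -0.5 * kappa2 * dphi**2
    Hddot = -kappa2 * dphi * ddphi
    # the term 2 d/dt (Hdot/H + 3H) appearing in Eq. (10)
    mass  = d2V(phi) + 2.0 * (Hddot / H - (Hdot / H)**2 + 3.0 * Hdot)
    ddQ   = -3.0 * H * dQ - mass * Q                   # Eq. (10) at k = 0
    return [dphi, ddphi, dQ, ddQ]

# initial condition as in Fig. 1: Q = 1 when phi = 0.2 m_pl
sol = solve_ivp(rhs, (0.0, 50.0 / m), [0.2, 0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12, max_step=0.05 / m)

phi, dphi, Q = sol.y[0], sol.y[1], sol.y[2]
H    = np.sqrt(kappa2 / 3.0 * (0.5 * dphi**2 + V(phi)))
zeta = H * Q / dphi   # Eq. (11); formally singular at the zeros of phidot
```

Away from the isolated zeros of $\dot{\varphi}$, $\zeta$ computed this way stays flat, in line with the analytic argument of Sec. II.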
# From Néel long-range order to spin-liquids in the multiple-spin exchange model ## I INTRODUCTION The two-dimensional triangular lattice antiferromagnet (2D-TLA) was first proposed as a candidate for the disordered (or spin-liquid) ground-state of the spin-$\frac{1}{2}$ Heisenberg model (Anderson et al. in the 70's). Different approaches failed to support this conjecture, favoring instead a ground-state with Néel long range order (LRO). Nevertheless, the lattice frustration of the 2D-TLA has attracted great interest among theorists, providing a challenge for exotic antiferromagnets. Recently, the multiple-spin exchange model has been extensively studied as an alternative to the Heisenberg model, showing a rich structure of ground-states. In this model, the ground-state can be ferromagnetic (FM), anti-ferromagnetic (AFM) with Néel LRO, or a spin-liquid (SL). A prospective phase diagram has been given by Misguich et al., who considered two-, four-, and five-spin exchange interactions on the 2D-TLA. They found that a large enough four-spin exchange interaction drives the FM phase into a SL phase. They did not study how the AFM Néel LRO is destroyed by the four-spin exchange interaction, and how the transition between Néel LRO and the short range RVB phase takes place. This question is the main object of this paper. Unfortunately there is no exact method allowing the study of the zero temperature phases of such frustrated systems. For a finite system, however, one can always, in principle, represent the eigenstates in the complete basis of spin configurations. This allows one to probe the exact ground-states of small size systems through numerical computations. The huge number of spin configurations ($2^N$) is a major obstacle for numerical simulations. On the most recent computers, the largest sample that may be handled in exact diagonalizations has $6\times 6$ sites. On the triangular lattice, the Quantum Monte Carlo method is plagued by the well-known sign problem, but a new technique called Stochastic Reconfiguration allows handling samples up to $12\times 12$ sites. All these calculations point to Néel LRO, with a sublattice magnetization of the order of $40\%$ of the saturated value. Series expansions give a reduced ($20\%$) but non-zero sublattice magnetization. On the other hand, in the case of short range correlations, the situation is more straightforward, as soon as the available sizes are of the order of, or larger than, the correlation length. This is fortunately the case in the SL phase found in the $J_2-J_4$ model ($J_2=0,\ J_4>0$) by Misguich et al. In this work, we use exact diagonalizations to obtain the exact eigenenergies versus wave vectors and total spin for 12, 16, 19, 21, 24 and 27 site samples of the $J_2-J_4$ model ($J_2=1,\ J_4>0$) on the 2D-TLA. In the classical limit, the AFM ground-state of the 2D-TLA can be described as a three-sublattice structure, with spins of different sublattices making angles of $2\pi/3$. Periodic boundary conditions are compatible with the three-sublattice structure for samples with 12, 21, 24 and 27 sites, but not for the 16 and 19 site samples. Therefore, we use twisted boundary conditions for the 16 and 19 site samples and periodic boundary conditions for the 12, 21, 24 and 27 site ones.
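Before turning to the model, the basic exact-diagonalization machinery referred to above can be sketched in a few lines. The sketch below handles only the two-spin (Heisenberg) part in the full $2^N$ basis; the bond lists of the actual triangular samples (with periodic or twisted boundaries) and the multiple-spin exchange terms are omitted, and the toy cluster is purely illustrative:

```python
import numpy as np

def heisenberg_spectrum(n_sites, bonds):
    """Dense exact diagonalization of H = sum_<ij> S_i . S_j in the full
    2^n basis of spin configurations (feasible only for small clusters;
    the 21-27 site samples require sparse, symmetry-adapted versions)."""
    dim = 1 << n_sites
    H = np.zeros((dim, dim))
    for i, j in bonds:
        for s in range(dim):
            si, sj = (s >> i) & 1, (s >> j) & 1
            if si == sj:
                H[s, s] += 0.25                 # Sz Sz, parallel spins
            else:
                H[s, s] -= 0.25                 # Sz Sz, antiparallel
                t = s ^ (1 << i) ^ (1 << j)
                H[t, s] += 0.5                  # (S+S- + S-S+)/2 spin flip
    return np.linalg.eigvalsh(H)

# toy check: 4 spins, all pairs coupled; levels are (S(S+1) - 3)/2
bonds = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(heisenberg_spectrum(4, bonds)[:4])        # -1.5, -1.5, -0.5, -0.5
```

For the sample sizes studied here the Hamiltonian is in practice block-diagonalized using the lattice translations, the point group and the total $S_z$, which is how the wave vector and total-spin labels of the levels are resolved.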
## II THE MODEL: THE MULTIPLE-SPIN EXCHANGE HAMILTONIAN The Hamiltonian of the multiple-spin exchange model is given by $$H=\sum_n(-1)^nJ_n(P_n+P_n^{-1}),\qquad J_n>0,\quad n\geq 2$$ (1) where $J_n$ are the $n$-spin exchange tunneling probabilities (exchange coefficients), and $P_n$ and $P_n^{-1}$ are the $n$-spin exchange operators and their inverse operators, respectively. The alternating sign in the summation over $n$ in Eq. 1 comes from the permutation of fermions. In general, the exchange coefficients decrease with increasing $n$. The two-spin exchange term gives exactly the Heisenberg Hamiltonian up to a constant, since one has $$P_2=2\,{\bf S}_i\cdot{\bf S}_j+\frac{1}{2}$$ (2) where ${\bf S}_i$ and ${\bf S}_j$ are spins localized at sites $i$ and $j$, respectively. The three-spin exchange operator is exactly equivalent to a sum of two-spin exchange operators. Thus, the three-spin exchange term of equation (1) can be absorbed into the two-spin exchange term, as long as $J_2$ is replaced by the effective two-spin exchange coefficient $J_2^{\mathrm{eff}}=J_2-2J_3$. Therefore, besides the two-spin exchange, the next most important term is the four-spin exchange. A pure positive two-spin exchange (i.e., the Heisenberg Hamiltonian) on the 2D-TLA gives an AFM phase with Néel LRO, and, as shown in Ref. , a pure four-spin exchange gives a SL phase. In this paper we use the specific properties of the spectra of these different kinds of phases to study the transition from one phase to the other, when the relative weight of the four-spin exchange $J_4$ increases relative to the antiferromagnetic two-spin coupling. In the following, all energies are measured in units of $J_2^{\mathrm{eff}}=1$. Little is known about this region of the phase diagram. Previous works are based on a classical approximation, semi-classical spin-wave calculations or mean-field Schwinger-boson results. The classical result predicts a transition from the 3-sublattice Néel state to a 4-sublattice tetrahedral state at $J_4=0.24$. Both quantum approaches indicate that the four-spin exchange strongly enhances fluctuations in the 3-sublattice Néel phase. Kubo et al. found that the sublattice magnetization vanishes for $J_4>0.17$. In the Schwinger-boson approach, the Néel state is destroyed when $J_4>0.25$. These two techniques have a general tendency to underestimate the effects of quantum fluctuations on ordered phases. The exact diagonalization analysis presented here indeed shows that the Néel long-ranged order disappears for a smaller value of $J_4$ (the critical value is estimated to be in the interval $J_4^C\approx 0.07\ldots 0.1$). ## III CRITERION TO DISCRIMINATE BETWEEN NÉEL LRO AND SL PHASE: THE SPIN GAP ? ### A Finite-size energy spectrum of the Néel LRO phase In the classical limit, an $N$-site 2D-TLA sample with Néel LRO is characterized by a three-sublattice structure with spin $N/6$ on each sublattice. Coupling of these three $N/6$-spins gives total spin $S$ with $\min\{2S+1,\,N/2-S+1\}$ degeneracy. In an isotropic antiferromagnet (as the collinear AFM, which has equal spin susceptibilities and spin wave velocities) the finite-size total energy depends on the total spin $S$ (to first order in $1/N$) as: $$E_S=E_0+\frac{1}{2N\chi}S(S+1),$$ (3) where $E_0=N\epsilon_0$ is the energy of the ground-state in the thermodynamic limit and $\chi$ is the isotropic magnetic susceptibility of the sample.
In the anisotropic case this equation should be rewritten: $$E_S=E_0+\frac{1}{2N\chi_{\perp}}S(S+1)+\frac{1}{2N}\left(\frac{1}{\chi_{\parallel}}-\frac{1}{\chi_{\perp}}\right)S_3^2,$$ (4) where $S_3$ is the component of the total spin $S$ on the internal symmetry axis of the spin system, and $\chi_{\parallel}$ and $\chi_{\perp}$ are the magnetic susceptibilities along the internal symmetry axis and in the perpendicular plane, respectively. In the broken symmetry picture the symmetry axis is perpendicular to the plane of the spins, and $\chi_{\parallel}$ (respectively $\chi_{\perp}$) measures the spin fluctuations orthogonal to (respectively in) the spin plane. Eq. 3 (respectively Eq. 4) is the dynamical equation of a rigid rotator (respectively of a quantum top). Eqs. (3, 4) show that the slopes of the total energy versus $S(S+1)$ and $S_3^2$ approach zero as $1/N$ does when $N\to\infty$. $S_3$ is a dynamically generated internal quantum number, which is not under control in a finite-size study. But the total spin $S$ is a good quantum number, and the $N^{-1}$ scaling of the $S(S+1)$ dependence of the total energy versus sample size is interesting because it is more rapid than the scaling law of the order parameter (which goes as $N^{-1/2}$). ### B Finite-size scaling in the SL phase In a SL phase, contrarily to the Néel LRO phase, the spin gap (i.e. the difference in total energy between the ground-states in the $S=1$ sector and in the $S=0$ sector) does not collapse to zero in the thermodynamic limit. The finite-size scaling law in this second situation is not known exactly, insofar as the "massive" phase is not characterized precisely. Heuristically, we expect the finite-size spin gap to decrease exponentially to a finite value $\Delta(\infty)$ with the characteristic length $\xi$ of the spin-spin correlations. For samples of linear size $L$ smaller than the correlation length and in the cross-over regime, there are not enough quantum fluctuations to destroy the sublattice magnetization and the system probably behaves as if it were classical (i.e. with a spin gap decreasing as $N^{-1}$). The following heuristic law might be used to interpolate between the two behaviors: $$\Delta(L)=\Delta(\infty)+\frac{\beta}{L^2}\exp(-L/\xi)$$ (5) ### C Quantum critical regime The use of this heuristic law (Eq. 5) encounters a severe difficulty as soon as the disordered system approaches a quantum critical point: in such a situation the correlation length $\xi$ diverges, the gap closes to zero, and on a finite-size sample it is impossible to discriminate between such a situation and isotropic Néel LRO (Eq. 3). ### D Numerical results In view of this difficulty we have done a pedestrian finite-size scaling of the spin gap by using the simplest linear $1/N$ behavior, which probably gives a lower bound of the gap in a SL away from critical points. The physical reason is that we expect the finite-size corrections to the gap value to be smaller in a system with a finite correlation length (no long-range order) than in a LRO Néel phase, where it vanishes as $1/N$<sup>*</sup><sup>*</sup>*This assumption would be invalid at a critical point where the uniform susceptibility vanishes: in such a case the gap might close as $1/\sqrt{N}$.
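The "pedestrian" extrapolation just described amounts to a two-parameter linear fit in $1/N$; a minimal sketch follows (the gap values below are placeholders, not the gaps tabulated in the paper):

```python
import numpy as np

N    = np.array([12, 16, 19, 21, 24, 27])              # sample sizes
gaps = np.array([0.95, 0.80, 0.72, 0.68, 0.62, 0.58])  # placeholder gaps

# simplest linear behavior in 1/N:  Delta(N) = Delta(infinity) + c/N
c, gap_inf = np.polyfit(1.0 / N, gaps, 1)
print(f"extrapolated spin gap Delta(inf) = {gap_inf:.3f}")
```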
The analysis of section IV shows that the behavior of the system changes rather abruptly from a Néel-like spectrum to a spectrum with a very large number of low lying singlets below the first $S=1$ state (and potentially a $T=0$ residual entropy in the singlet sector). This happens before any decrease to zero of the spin velocity or of the homogeneous spin susceptibility. So, if the true thermodynamical system has indeed a critical point between the two phases, the sizes we are looking at are too small to scrutinize the critical regime. The results extrapolated to $N\to\infty$ are shown in Fig. 1. Strictly speaking the spin gap never extrapolates to zero except for a pure $J_2$, where it is equal to zero within its error bar (at $J_4=0$, data of Bernu et al. for $N=36$ are added to the present results, see Fig. 1). Nevertheless these data already show three distinct ranges for the parameter $J_4$: for very small $J_4$ (below 0.075) Néel LRO is plausible but should be confirmed by another approach. For $J_4$ larger than $0.1$ a gap certainly opens rapidly with increasing $J_4$ and then decreases for $J_4>0.175$. The spin gap criterion cannot give more insight on the phase diagram. We will now move to the analysis of the symmetries of the low lying levels of the spectra to characterize more precisely these three phases. ## IV SYMMETRIES OF THE LOW LYING LEVELS IN A NÉEL ORDERED PHASE ### A Theoretical background First we show the low energy spectrum of the pure Heisenberg model on the 21 site sample (Fig. 2). In order to emphasize the low energy structure we have displayed the low energy spectrum minus a rigid rotator energy $\alpha S(S+1)$. Let us first concentrate on the lowest part of the energy spectrum in each $S$ sector (solid and open triangular symbols in the figure). This family of levels forms, on a finite-size lattice, the quantum counterpart of the semi-classical Néel state. These specific states are in the trivial representation of the invariance group of the three-sublattice Néel ordered solution. To be definite: * The Néel ground-state breaks the 1-step translation but is invariant under a 3-step translation: as a consequence the only wave vectors appearing in this family of QDJS (quasi-degenerate joint states, defined by Bernu et al. ) are the center ${\bf k}=(0,0)$ and the corners $\pm{\bf k}_0$ of the Brillouin zone. * These QDJS belong specifically to the trivial representation of $C_{3v}$, as the Néel state itself (i.e. they are invariant under a $2\pi/3$ rotation and under a reflection symmetry). * As the $\pi$ rotation symmetry of the lattice is broken in this particular ground-state, these QDJS appear either in the odd or even representation of the 2-fold rotation group (see Ref. for more details). * The numbers and characteristics (quantum numbers) of the QDJS in each $S$ sector are precisely fixed by theory: for the 21 site spectrum displayed in Fig. 2 (as for all sizes that have been studied up to now) the numbers of low lying levels and their quantum numbers correspond exactly to the above-mentioned theoretical predictions. ### B Numerical results The dynamical law given by Eq. 4 is still imperfectly obeyed for the 21 site sample: in particular the generation of the internal symmetry is still imperfect, but nevertheless the spectrum of a quantum top can already be anticipated.
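Quantitatively, the rigid-rotator law Eq. (3) is what is fitted to the QDJS tower to extract the susceptibility; a minimal sketch, with hypothetical level values (the actual QDJS energies are in the paper's figures):

```python
import numpy as np

N   = 21
S   = np.arange(6)                                          # total-spin sectors
E_S = np.array([-9.00, -8.93, -8.79, -8.58, -8.30, -7.95])  # hypothetical

# Eq. (3): E_S = E_0 + S(S+1)/(2 N chi)  ->  linear fit in S(S+1)
slope, E0 = np.polyfit(S * (S + 1), E_S, 1)
chi = 1.0 / (2.0 * N * slope)
print(f"E_0 = {E0:.3f}, chi = {chi:.3f}")
```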
Above these levels with specific properties, there appear eigenstates with wave vectors belonging to the inside of the Brillouin zone (simple dashes in Fig. 2). A group of such eigenstates with different total spin represents a magnon excitation of the Néel ground-state. As the antiferromagnetic magnons have a linear dispersion law, the softest magnon energy scales as the smallest wave vector accommodated in the Brillouin zone of the finite-size sample. Thus, for sizes large enough, these levels collapse to the ground-state as $1/\sqrt{N}$, more slowly than the QDJS, which collapse as $1/N$ to the thermodynamic Néel ground-state energy. That is the reason for the appearance in Fig. 2 of a quasi gap between the QDJS and the magnon excitations. This hierarchy of low lying levels is a very strong constraint on the finite-size sample spectra. It is perfect for sizes up to 27 and for $J_4$ smaller than or equal to 0.075, and totally absent for $J_4$ larger than or equal to 0.1 (an illustration of the striking qualitative differences between spectra in the Néel region and in the spin-liquid phase has been previously published: two $N=27$ spectra at $J_4=0$ and $J_4=0.1$ are displayed in Fig. 2 and Fig. 3 of Ref. ). * This result, together with the spin gap behavior, consistently proves that for $J_4$ larger than or equal to 0.1 the system is in a spin-liquid state with rather short range spin-spin correlations. * For $J_4\leq 0.075$ the structure of the low lying eigen-levels of the spectra is compatible with Néel LRO. BUT, as discussed above, it is indeed impossible to precisely point to a quantum critical transition within this approach. In view of the spectra, we might speculate that the transition is second order and that it takes place between 0.07 and 0.1. * From $J_4=0$ to $J_4\approx 0.1$ we see a softening of the spin-wave velocity consistent with the gradual decrease of the Néel LRO (Fig. 3). However, Fig. 4 shows that a large number of singlet states are already present at low energy when $J_4=0.1$. Therefore, the system is certainly no longer in the ordered phase at $J_4\approx 0.1$. We can conclude from Fig. 3 that the spin wave velocity does not vanish at the critical point (even if the precise location of the transition cannot be determined). This has been previously suggested by various analytical approaches. The sizes studied are nevertheless too small to check Azaria's prediction of an $O(4)$ symmetry of the effective field theory at the critical point. ## V THE LOW ENERGY EXCITATIONS OF THE SL PHASE Contrary to our expectations, the SL phase which appears immediately after the disappearance of the Néel ordered phase is not the phase studied by Misguich et al. It is indeed a SL phase, with a gap and short range spin-spin correlations. But, as can be seen in Fig. 4, this phase exhibits a very large number of singlets in the magnetic gap and seems in this respect similar to the spin-liquid phase of the Heisenberg model on the kagomé lattice. However, we are not aware of any exponential degeneracy (i.e. $\propto\exp(N)$) in the classical MSE model<sup>*</sup><sup>*</sup>*Momoi et al. found a ground-state degeneracy in the classical MSE model, but this degeneracy only grows as the exponential of the linear size of the system ($\exp\sqrt{N}$). Moreover, this classical degeneracy was only found in a region where $J_2<0$ is ferromagnetic ($\frac{1}{4}\leq J_4/|J_2|\leq\frac{3}{4}$)., as is the case for the classical kagomé antiferromagnet.
We suspect that a much larger four-spin exchange parameter will be needed to recover the SL phase studied by Misguich et al. These data point to the existence of this new phase in a finite range of parameters $0.075\lesssim J_4\lesssim 0.25$. However, one cannot disregard the hypothesis that these properties are in fact those of a critical point, with a critical region enlarged by finite-size effects. More work with different methods is needed to clarify this point. ## VI Magnetization Plateaus In an external magnetic field $B$ along the $z$ axis, the total energy of the state with component $S_z$ of the total spin is given by: $$E_B=E_S-S_zB$$ (6) The magnetization is determined by the minimum of $E_B$ with respect to $S_z$, which requires $\partial E_B/\partial S_z=0$. Therefore, in the isotropic case with Néel LRO, one has $$m=2\chi B-1/N$$ (7) where $m=2S/N$ is the polarization relative to the saturated magnetization $N/2$ and $\chi$ is indeed the magnetic susceptibility. We noticed that when the four-spin exchange interaction increases, deviations from Eq. 3 occur about $S_z=N/6$ and $N/4$. This is in agreement with the earlier mean-field calculation of Kubo and Momoi, who predicted magnetization plateaus at $m=1/3$ and $m=1/2$ in the $J_2-J_4$ model. We present the magnetization curve of the 24-site sample in Fig. 5; a minimal numerical sketch of how such curves follow from Eq. (6) is given after the conclusion. A small plateau exists at $1/3$ magnetization for $J_4=0$; its width first increases slowly as $J_4$ increases and then decreases from around $J_4=0.125$. This $1/3$ plateau has also been found in previous studies (see and references therein) in the pure three-sublattice Néel ordered system. The $m=1/2$ plateau appears at about $J_4=0.1$ and its width increases with $J_4$. Finally, there still exists a plateau at about $m=1/2$ in samples with odd numbers of sites, but it is distributed over the two positions closest to the $1/2$ magnetization. The $m=1/3$ and $m=1/2$ plateaus correspond to the classical $uud$ and $uuud$ orderings of spins (see Momoi et al. for the four-sublattice $uuud$ state). These phases are more "classical" than the zero-field phase, but still show a decrease of the sublattice magnetization from the classical saturation values. ## VII Conclusion In conclusion, we have studied the transition between the Néel ordered state and a spin-liquid state of the multiple-spin exchange model by means of the exact diagonalization method. The pure three-sublattice Néel ordered phase is gradually destroyed by quantum fluctuations when increasing the 4-spin exchange coupling. The spin wave velocity decreases but apparently remains finite at the transition. The quantum disordered phase tuned by the 4-spin exchange coupling is different from the pure $J_4$ phase studied by Misguich et al. It exhibits low energy singlet excitations, reminiscent of the kagomé SL. This result opens many interesting questions that cannot be answered in the present framework: is this phase a new generic SL phase or a finite-size manifestation of a quantum critical regime? Are these singlet excitations the "resonon" modes invoked by Rokhsar and Kivelson? In agreement with previous studies, we find two magnetization plateaus at $1/3$ and $1/2$ of the full magnetization. These plateaus are associated to the semi-classical $uud$ and $uuud$ ordering structures. A finite-size scaling on much larger sizes is needed to draw a definite conclusion on this magnetic phase diagram.
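As announced in Sec. VI, the magnetization curves follow from the exact levels through Eq. (6); a minimal sketch of the construction (the level values below are hypothetical, not those of the 24-site sample):

```python
import numpy as np

def magnetization(E_S, N, B):
    """Minimize E_B = E_S - S_z B (Eq. 6) over S_z = S; a plateau is a
    range of B over which the minimizing S, hence m = 2S/N, is constant."""
    S = np.arange(len(E_S))
    return 2.0 * S[np.argmin(E_S - S * B)] / N

# hypothetical lowest energies per total-spin sector for N = 12 (S = 0..6)
E_S = np.array([-5.00, -4.92, -4.70, -4.30, -3.70, -2.85, -1.70])
for B in np.linspace(0.0, 1.5, 16):
    print(f"B = {B:4.2f}  m = {magnetization(E_S, 12, B):.3f}")
```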
# 1 Selected sample: we define as lower red giant branch (lower-RGB) the evolutionary phase between first dredge-up completion and the RGB bump: small mass stars on the lower-RGB should have a well defined set of CNO and Li abundances, distinct from that observed in main sequence stars (MS stars, which have not yet experienced the first dredge-up) and upper-RGB stars (hereinafter, upper-RGB stars are those stars first ascending the red giant branch, which are brighter than the RGB bump). ## 1 Introduction Stellar models predict that as a small mass star evolves up the RGB, the outer convective envelope expands inward and penetrates into the CN-cycle processed interior regions (first dredge-up). Approximately the outer 50% of the star by mass is involved in this mixing, which brings to the surface mainly <sup>13</sup>C and <sup>14</sup>N, while the primordial <sup>12</sup>C and fragile, light elements like Li, Be, and B are transported from the surface to the interior. Results for old disk field giants (Shetrone et al. 1993) and metal-poor stars (Sneden et al. 1986) show that the first dredge-up occurs at the predicted luminosities; however, mixing in bright giants is much more extreme than predicted by evolutionary models. On the other hand, some further mixing is possible in the latest phases of the RGB (see e.g., Charbonnel 1994, 1995): in fact, after the end of the dredge-up phase is reached, the convective envelope begins to recede, leaving behind a chemical discontinuity. This is subsequently contacted by the H-burning shell, giving rise to the so-called RGB bump. Before this contact is made, the H-burning shell is advancing through a region where there are appreciable composition gradients, which should inhibit any (rotationally induced) mixing. Thereafter, since there is no mean molecular weight gradient between the convective envelope and the near vicinity of the shell, it is possible that circulation currents, perhaps driven by meridional circulations activated by core rotation (Sweigart & Mengel 1979), give rise to further mixing. In principle, globular clusters offer a unique opportunity to verify this scheme, since they provide a large number of stars located at the same distance (thus allowing an accurate definition of the evolutionary status of individual stars), having the same age (thus similar masses evolving along the RGB), and, hopefully, the same initial chemical composition. However, surface abundances of globular cluster RGB stars have revealed a complex phenomenology (for a summary, see Kraft 1994), which has so far defied any attempt at a detailed explanation. This is likely due to the fact that the surface abundances of these stars are significantly affected by several major factors: deep mixing within individual stars, primordial inhomogeneities within a cluster, and perhaps accretion of nuclearly processed material during the early phases of the cluster evolution. Furthermore, it is becoming increasingly clear that the dense environment plays an important role in determining other basic cluster features (like the colour of the horizontal branch), either by causing systematic variations in the basic stellar properties (like e.g., the initial angular momentum), or by favouring pollution of the surface layers of stars by ejecta from other stars, or both.
To solve these issues, it is necessary to first understand the evolution of single undisturbed small mass stars in the field, in a restricted range of mass and metal abundance (both of them affecting the luminosity of the RGB bump). ## 2 Sample selection and observations To this purpose, we selected a sample from Anthony-Twarog & Twarog (1994, ATT) for the evolved stars, and Schuster & Nissen (1989) for the main sequence and turn-off stars, plus a few local subdwarfs from Clementini et al. (1998). We restricted the selection to metallicities in the range $-2<$ \[Fe/H\] $<-1$, in order to: (i) avoid possible massive interlopers (thin disk stars), (ii) have a large number of moderately bright stars with accurate absolute magnitudes, hence clear evolutionary phases, and (iii) avoid complications from the large star-to-star abundance variations present among the most metal-poor stars. The resulting sample (62 stars) is shown in Figure 1; evolutionary phases are derived from the position in the $(b-y)_0$ vs $c_{1,0}$ diagram and in the classical colour magnitude diagram; M<sub>V</sub> values are from ATT (giants and subgiants) or from Hipparcos (dwarfs). High S/N ($>100$), high resolution ($R>50,000$) spectra were acquired at the McDonald Observatory (2.7m telescope $+$ "2d-coudé" echelle spectrometer, spectral range $3800<\lambda<9000$ Å) and at ESO (CAT$+$CES spectrograph, spectral regions centered at 4230, 4380, 5680, 6300, 6700, and 7780 Å), to measure abundance indicators for Li, C, N, O, Na and Fe. ## 3 Atmospheric parameters and abundance analysis Effective temperatures (T<sub>eff</sub>'s) were derived from the dereddened $(B-V)_0$ and $(b-y)_0$ colours using colour-T<sub>eff</sub> transformations from Kurucz (1995) models with no overshooting, and reddenings from ATT or Carretta, Gratton & Sneden (1999). Note that no empirical correction was required for these calibrations (see e.g., Castelli, Gratton & Kurucz 1997). Gravities were derived from absolute magnitudes and T<sub>eff</sub>'s, assuming a mass M = 0.85 M<sub>⊙</sub> for all stars and B.C.'s from Kurucz (1995). Eliminating trends of the abundances derived from Fe I lines versus EWs for stars with McDonald spectra, we obtained microturbulent velocities $v_t$'s for this subsample and a tight relation $v_t=f$(T<sub>eff</sub>, log g). This was used to derive accurate $v_t$ values for stars with CAT spectra (having very few Fe lines measured). The abundance analysis was performed using Kurucz (1995) model atmospheres with no overshooting. Further details are in Gratton et al. (2000). Abundances for Fe I, Fe II, O I, Na I, $\alpha$-elements and heavy elements were derived from measured equivalent widths (EWs). Whenever possible, we complemented the rather few atomic lines measured on CAT spectra with lists of accurate (errors $\leq 3$ mÅ) and homogeneous EWs from the literature (see Gratton, Carretta & Castelli 1997 for references). O abundances were obtained from both forbidden and permitted lines. Abundances from the latter (as well as the Na abundances from the 5682-88 Å and 6154-60 Å doublets) include corrections for departures from LTE (Gratton et al. 1999). Average \[O/Fe\] ratios were computed after a small offset of 0.08 dex between the abundances from \[O I\] and O I lines was accounted for. C abundances, as well as <sup>12</sup>C/<sup>13</sup>C isotopic ratios, were derived from spectral synthesis of the G-band around 4300 Å.
Synthetic spectra computations of the (0,0) and (1,1) bandheads of the violet band of CN were used to derive N abundances for the 24 program stars with McDonald spectra. Li abundances have been derived from comparison with synthetic spectra, and complemented with data from Pilachowski, Sneden & Booth (1993, PSB), after the Fe and Li abundances from their analysis were increased to compensate for the offsets (0.09 and 0.21 dex, respectively) with respect to our results, due to our higher T<sub>eff</sub>'s ($112\pm 32$ K, from 14 stars in common with PSB). ## 4 Results Results for Li, C, N, O, Na and <sup>12</sup>C/<sup>13</sup>C isotopic ratios are summarized in Figure 2 for the 62 program stars with accurate evolutionary phase, plus 43 more stars having accurate abundances for some of these elements and similarly well defined luminosities from the literature. From Figure 2 we can see that small mass lower-RGB stars (i.e., stars brighter than the first dredge-up luminosity and fainter than the RGB bump) have abundances of the light elements in agreement with predictions from classical evolutionary models: only marginal changes occur for the CNO elements, while dilution within the convective envelope causes the surface Li abundance to decrease by a factor of about 20. A second, distinct mixing episode occurs in most (perhaps all) small mass metal-poor stars just after the RGB bump, when the molecular weight barrier left by the maximum inward penetration of the convective envelope is canceled by the outward expansion of the H-burning shell, in agreement with recent theoretical predictions. In field stars, this second mixing episode only reaches regions of incomplete CNO burning: it causes a depletion of the surface <sup>12</sup>C abundance by about a factor of 2.5, and a corresponding increase in the N abundance by about a factor of 4. The <sup>12</sup>C/<sup>13</sup>C ratio is lowered to about 6 to 10 (close to, but distinctly larger than, the equilibrium value of 3.5), while practically all the remaining Li is burnt. However, an O-Na anti-correlation such as is typically observed among globular cluster stars (see Figure 3) is not present in field stars. None of the 29 field stars more evolved than the RGB bump (including 8 RHB stars) shows any sign of O depletion or Na enhancement. This means that in field stars the second mixing episode is not deep enough to reach regions where ON-burning occurs.
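The gravities of Sect. 3 follow from the standard relation between mass, effective temperature and luminosity; a minimal sketch, where the solar reference values are conventional choices and the sample inputs are illustrative, not taken from the paper's tables:

```python
import math

# Conventional solar reference values (the paper does not quote its own)
LOGG_SUN, TEFF_SUN, MBOL_SUN = 4.44, 5777.0, 4.75

def log_g(M_V, BC, teff, mass=0.85):
    """Surface gravity from the standard relation
    log g = log g_sun + log(M/M_sun) + 4 log(Teff/Teff_sun)
            + 0.4 (M_V + BC - M_bol_sun)."""
    m_bol = M_V + BC
    return (LOGG_SUN + math.log10(mass)
            + 4.0 * math.log10(teff / TEFF_SUN)
            + 0.4 * (m_bol - MBOL_SUN))

# e.g. a lower-RGB star (hypothetical input numbers)
print(round(log_g(M_V=2.5, BC=-0.35, teff=4900.0), 2))   # ~3.0
```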
# Kaon Condensation in Dense Matter ## Abstract The kaon energy in neutron matter is calculated analytically with the Klein-Gordon equation, by making a Wigner-Seitz cell approximation and employing a $K^-N$ square well potential. The transition from the low density Lenz potential, proportional to the scattering length, to the high density Hartree potential is found to begin at fairly low densities. Exact non-relativistic calculations of the kaon energy in a simple cubic crystal of neutrons are used to test the Wigner-Seitz and the Ericson-Ericson approximation methods. All the calculations indicate that by $\sim 4$ times nuclear matter density the Hartree limit is reached, and as the Hartree potential is less attractive, the density for kaon condensation appears to be higher than previously estimated. Effects of a hypothetical repulsive core in the $K^-N$ potential are also studied. Kaon condensation in dense matter was suggested by Kaplan and Nelson, and has been discussed in many recent publications. Due to the attraction between $K^-$ and nucleons its energy decreases with increasing density, and eventually, if it drops below the electron chemical potential in neutron star matter in $\beta$-equilibrium, a Bose condensate of $K^-$ will appear. The kaon-nucleon interaction in vacuo has been described by Brown, Lee, Rho, and Thorsson with an effective Lagrangian based on chiral perturbation theory. The $K^-n$ interaction is well described and is fortunately not much affected by resonances, unlike the $K^-p$ interaction. Kaiser, Rho, Waas and Weise have used energy dependent $\overline{K}N$ amplitudes calculated in a coupled-channel scheme starting from the chiral SU(3) effective Lagrangian. They correct for correlation effects in a nuclear medium and find that $K^-$'s condense at densities above $4\rho_0$, where $\rho_0=0.16$ fm<sup>-3</sup> is normal nuclear matter density. This is to be compared to the central density of $\sim 4\rho_0$ for a neutron star of mass 1.4 $M_\odot$ according to the estimates of Wiringa, Fiks and Fabrocini using realistic models of nuclear forces. The condensate could change the structure and affect maximum masses and cooling rates of massive neutron stars. In this letter we calculate the kaon energy in neutron matter using the Wigner-Seitz approximation for the Klein-Gordon equation. We assume that the kaon-nucleon interaction is via the Weinberg-Tomozawa vector potential $V(r)$. In the analysis of Ref. the $K^+N$ interaction was also found to be dominated by $\omega$ and $\rho$ vector mesons. The energy of the kaon-nucleon center-of-mass system with respect to the nucleon mass is then $$\omega =\sqrt{k^2+m_K^2}+V(r)+\frac{k^2}{2m_N},$$ (1) where $m_N=939.5$ MeV is the neutron mass, $m_K=494$ MeV the kaon mass, and $k$ is the kaon momentum in the center-of-mass frame (we use units in which $\hbar$ and $c$ are unity). We have included the recoil kinetic energy of the nucleon, assuming that terms of order $k^4/8m_N^3$ and higher can be neglected. For a relativistic description of the kaon in a vector potential we employ the following recoil corrected Klein-Gordon (RCKG) equation, obtained by quantizing Eq. (1) ($k=-i\nabla$) $$\left\{(\omega -V(r))^2+\frac{m_N+\omega -V(r)}{m_N}\nabla^2-m_K^2\right\}\varphi =0.$$ (2) The shape of the kaon-nucleon potential is not known.
In most of our work we approximate it with a square well: $$V(r)=-V_0\,\Theta(R-r).$$ (3) Yukawa and repulsive core shaped potentials are less favorable for kaon condensation. The range of the interaction $R$ and the potential depth $V_0$ are related through the s-wave scattering length: $$a=R-\frac{\tan(\kappa_0R)}{\kappa_0},$$ (4) where $\kappa_0^2=(2m_KV_0+V_0^2)\,m_N/(m_N+m_K+V_0)$. If $V_0\ll m_K$, this reduces to the nonrelativistic result $\kappa_0^2=2m_RV_0$, where $m_R=m_Km_N/(m_K+m_N)$ is the kaon-nucleon reduced mass. The $K^-n$ scattering length is negative, corresponding to a positive $V_0$. The kaon-nucleon scattering lengths are, to leading order in chiral meson-baryon perturbation theory, given by the Weinberg-Tomozawa vector term $$a_{K^\pm n}=a_{K^\pm p}/2=\pm\frac{m_R}{8\pi f^2}\approx\pm 0.31\ \mathrm{fm},$$ (5) where $f\approx 90$ MeV is the pion decay constant. Scalar and other higher order terms have been estimated, and we shall use the value $a_{K^-n}=-0.41$ fm from . This estimate does not include the $\Sigma\pi$ decay channels, which account for the complex parts of the $K^-N$ scattering lengths. The empirical scattering lengths extracted from scattering measurements as well as kaonic atoms are $a_{K^-n}=(-0.37-i0.57)$ fm and $a_{K^-p}=(0.67-i0.63)$ fm. Effective Lagrangians predict $a_{K^-p}\approx -0.82$ fm in perturbation theory. However, it becomes positive due to the presence of the nonperturbative $\Lambda(1405)$ resonance. Interestingly, we can estimate the range of our square well potential assuming that the $\Lambda(1405)$ is a $K^-p$ bound state. As in we assume the $K^-p$ potential to be twice as strong as the $K^-n$ potential given by Eq. (4). In order to form a $K^-p$ bound state with binding energy of $m_p+m_K-m_{\Lambda(1405)}=27$ MeV, and have $a_{K^-n}=-0.41$ fm, we need $R\approx 0.7$ fm. For this reason we present results for $R=0.7$ fm; however our main conclusions are valid for reasonable values of $R$. Using $a=-0.41$ fm and $R=0.7$ fm we obtain $V_0=122$ MeV with RCKG and 126 MeV with non-relativistic kinematics. It is much weaker than the $NN$ interaction, and has a small relativistic correction. In neutron matter at low densities the kaon energy deviates from its rest mass by the "Lenz" potential, proportional to the scattering length, $$\omega_{Lenz}-m_K=\frac{2\pi}{m_R}a_{K^-n}\rho,$$ (6) which is the optical potential obtained in the impulse approximation. At high densities the kaon energy deviates from its rest mass by the Hartree potential: $$\omega_{Hartree}-m_K=\rho\int V_{K^-n}(r)\,d^3r=-\frac{4\pi}{3}R^3\rho V_0,$$ (7) which, as shown in , is considerably less attractive. To estimate the transition from the low density Lenz potential to the high density Hartree potential we solve the Klein-Gordon equation for kaons in neutron matter in the Wigner-Seitz (WS) cell approximation. It allows an analytical calculation of the kaon energy, which will be compared to numerical calculations on a cubic lattice. The cell boundary condition contains the important scale of the range of kaon-neutron correlations in matter, and gives the correct low density (Lenz) and high density (Hartree) limits.
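The quoted well depths follow from inverting Eq. (4); a minimal sketch of the non-relativistic case, which should reproduce the 126 MeV figure quoted above:

```python
import numpy as np
from scipy.optimize import brentq

hbarc = 197.327           # MeV fm
mK, mN = 494.0, 939.5     # MeV
mR = mK * mN / (mK + mN)  # reduced mass

a_target, R = -0.41, 0.7  # fm

def a_of_V0(V0):
    """Non-relativistic scattering length of the square well, Eq. (4),
    with kappa_0^2 = 2 m_R V0."""
    kappa0 = np.sqrt(2.0 * mR * V0) / hbarc   # fm^-1
    return R - np.tan(kappa0 * R) / kappa0

V0 = brentq(lambda v: a_of_V0(v) - a_target, 1.0, 300.0)
print(f"V0 = {V0:.0f} MeV")   # ~126 MeV
```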
The RCKG equation for s-waves is $$\left\{\frac{m_N+\omega -V}{m_N}\frac{d^2}{dr^2}+(\omega -V)^2-m_K^2\right\}r\varphi =0,$$ (8) which has the solution $$u\equiv r\varphi =\left\{\begin{array}{ll}\sin(\kappa r),&\text{for }r\leq R\\ Ae^{kr}+Be^{-kr},&\text{for }r\geq R\end{array}\right.$$ (11) where $\kappa^2=((\omega +V_0)^2-m_K^2)\,m_N/(m_N+\omega +V_0)$ and $k^2=(m_K^2-\omega^2)\,m_N/(m_N+\omega)$. By matching the wave function $u$ and its derivative at $r=R$ we obtain $$\frac{k}{\kappa}\tan(\kappa R)=\frac{e^{2kR}+\frac{B}{A}}{e^{2kR}-\frac{B}{A}}.$$ (12) In the WS approximation $\varphi^{\prime}(r_0)=0$, where the cell size $r_0$ is given by the density $\rho =(4\pi r_0^3/3)^{-1}$. From Eq. (11) this implies $$kr_0=\frac{e^{2kr_0}+\frac{B}{A}}{e^{2kr_0}-\frac{B}{A}}.$$ (13) Eliminating the coefficient $B/A$ from Eqs. (12) and (13) gives $$\frac{k}{\kappa}\tan(\kappa R)=\frac{e^{2k(R-r_0)}-(1-kr_0)/(1+kr_0)}{e^{2k(R-r_0)}+(1-kr_0)/(1+kr_0)},$$ (14) which determines $k$ and thus the kaon energy. The resulting kaon energy is shown in Fig. (1). At low densities, $r_0\gg R$, the kaon energy is $\omega\approx m_K$ and $k\to 0$. Since also $kr_0\ll 1$, we can expand the r.h.s. of Eq. (14) and find $$\frac{\tan(\kappa R)}{\kappa}=R+\frac{1}{3}k^2r_0^3+\frac{1}{5}k^4r_0^5+\cdots.$$ (15) Now we can extract the kaon energy $$\omega^2-m_K^2=-k^2\frac{m_K}{m_R}$$ (16) $$=\frac{m_K}{m_R}\,4\pi a_{K^-n}\rho\left(1+\frac{9}{5}a_{K^-n}\left(\frac{4\pi}{3}\rho\right)^{1/3}+\cdots\right).$$ (17) The linear part of Eq. (17) is the Lenz potential, Eq. (6), since $\omega +m_K\approx 2m_K$ at small $\rho$. The next order scales as $\rho^{4/3}$, and it becomes a quarter of the leading Lenz potential at $\rho\approx\rho_0/16$. This demonstrates that the Lenz potential of kaons is valid only at very small densities, and is of limited interest for kaon condensation. At the density where $r_0=R$, equations (8) and (9) are solved by a constant $\varphi$, implying $\omega =m_K-V_0$, the Hartree energy. At even higher densities the two-body potentials overlap with each other, and the Hartree approximation presumably becomes valid. It gives for $r_0\leq R$: $$\omega =m_K-V_0\left(\frac{R}{r_0}\right)^3.$$ (18) Note that this equation is valid in both relativistic and non-relativistic mean field limits; only the value of $V_0$ is influenced by relativistic effects in the scattering process. If the kaon-nucleon potential has a short range repulsive core of radius $R_c$, a stronger attractive potential $V_0$ is needed at $R>r>R_c$ to obtain the same scattering length. In Fig. (1) we show an example of the effect of a repulsive core. The repulsion is chosen such that the interaction has zero volume integral, i.e., the core potential is $V_c=V_0(R^3/R_c^3-1)$. We choose $R=1$ fm and $R_c=R/2$. In order to obtain the scattering length $a_{K^-n}=-0.41$ fm with non-relativistic kinematics, the attractive potential depth is then $V_0=153$ MeV.
At very low densities the kaon energy calculated with the WS approximation follows the Lenz potential and is not affected by the presence of a repulsive core. At intermediate densities, $r_0\sim R$, it is actually lower with than without a repulsive core. However, at higher densities the kaon energy approaches $m_K$, the Hartree limit for an interaction with zero volume integral. The presence of a repulsive core will thus further reduce the possibility of kaon condensation. In order to test the WS approximation we consider an extreme limit in which the nucleons are infinitely massive and fixed in a simple cubic lattice. Using non-relativistic kinematics, it is simple to solve for the ground state of the kaon in this lattice. We evaluate the imaginary-time propagator $\exp[-H\tau]$ for small $\tau$ on a grid, and iterate $\Psi(\tau +\Delta\tau)=\exp[-H\Delta\tau]\Psi(\tau)$ until convergence to the ground state. This can be done efficiently by starting with a coarse grid and then using a finer mesh as the iterations proceed. The WS results for non-relativistic kinematics and infinitely heavy nucleons are compared with the lattice results for an $R=0.7$ fm square well potential with $a_{K^-n}=-0.41$ fm. The structure of the lattice is not important; results for the bcc lattice, for example, fall between the WS and the exact results for the simple cubic lattice. In Fig. (2) we show $(\omega_K-m_K)\rho_0/\rho$, which equals $2\pi a_{K^-n}\rho_0/m_R$ in the Lenz and $-V_0R^3\,4\pi\rho_0/3$ in the Hartree limits. We find little difference between the two sets of results throughout the range of densities considered. In Ref. the kaon energy is corrected for correlations in the medium. This effect is analogous to the Ericson-Ericson (EE) correction for pions in a nuclear medium and the Lorentz-Lorenz effect in a dielectric medium. It is interesting to compare the kaon energy $\omega_{EE}$ obtained with the EE method for a cubic lattice of massive neutrons. It is given by: $$\omega_{EE}-m_K=\frac{1}{m_K}\frac{2\pi a\rho}{1-a\xi\rho},\qquad\xi =\int d^3r\,\frac{C(r)}{r},$$ (19) where $C(r)$ is the nucleon-nucleon correlation function and $\xi\rho$ is the inverse correlation length. In a simple cubic lattice $\xi\rho r_0=1.54$, while realistic neutron matter wave-functions calculated in give $\xi\rho r_0\approx 1.23$ at densities $\sim 2\rho_0$. For comparison, $\xi\rho r_0=0.92$ in a neutron Fermi-gas. At lower densities there is little difference between the EE results for $\xi\rho r_0=1.54$ and the WS or the exact results. However, at higher densities, $\omega_{EE}$ is larger than the correct result. Note that the WS and the exact results depend upon the interaction range, while $\omega_{EE}$ depends only on the scattering length $a$. For larger values of $R$, same $a$, the exact and WS energies are lower, further below the EE result, while for smaller $R$ they would move up towards or even above the EE result. This is to be expected, since the EE approximation assumes that the kaon interacts with only one nucleon at a time, which is valid when $r_0\gg R$. At high densities the EE method is not expected to be useful; however, it does seem to be qualitatively good for $R\approx 0.7$ fm. The EE results are not excessively sensitive to the value of $\xi\rho r_0$. In Fig. (2) we also show the results obtained with $\xi\rho r_0=1.23$, appropriate for realistic neutron matter.
They cross the Hartree line at $\rho\approx 4\rho_0$. Self-energies in dilute systems can be expanded in terms of the scattering length (the so-called Galitskii integral equations). Analogous results are obtained in chiral perturbation theory. However, such expansions are valid only when the interparticle distance is much larger than the pair interaction range, so that only the scattering length matters. They generally do not have the correct high density limit, which depends upon the shape of the interaction in addition to the scattering length. The lowest order constrained variational method used to treat strong correlations in nuclear matter and liquid Helium is identical to the WS cell approximation employed here if the healing distance is chosen as $r_0$. These methods have the correct low and high density limits, and are meant to provide a good approximation over the entire density range. They are probably less accurate than the low-density expansions in the region where the expansions are valid. For example, consider the low density expansion of the WS energy (Eq. 17). To second order in the scattering length it corresponds to $\xi\rho r_0=1.8$, much larger than the realistic values quoted above. Thus it is likely that at densities $\sim\rho_0$ the EE results for neutron matter shown in Fig. 2 for $\xi\rho r_0=1.23$ are more accurate than the WS. However, the possible error in the WS results at $\rho\sim\rho_0$ seems to be $<5\%$ by comparison. It is also possible to calculate corrections to the Hartree potential at high densities by coupling the kaon motion to phonons. Their estimate at $\rho =4\rho_0$ is about 16 MeV for the square well potential with 0.7 fm radius. This correction decreases as $\rho$ increases. It is larger for Yukawa shaped potentials (same $a$ and comparable radius) than for the square well; however, the Hartree potential is less attractive for the Yukawa shaped interaction. In conclusion, it seems that we can use the Hartree limit to estimate the kaon energy in matter at densities $\gtrsim 4\rho_0$, where kaon condensation may occur. On this basis it appears from Fig. 1 that, for the electron potential $\mu_e(\rho)$ calculated from modern realistic $NN$ interactions, without quark drops in matter, kaon condensation is unlikely up to $\rho =7\rho_0$, which is just above the estimated range of densities possible in neutron stars. With the $\mu_e(\rho)$ used in Ref. , which is larger, the condensation could occur at $\rho\approx 6\rho_0$. If dense matter has $\sim 10\%$ protons, whose interaction with the $K^-$ is believed to be twice as strong, the Hartree potential will be more attractive by $\sim 10\%$. This, together with the realistic $\mu_e$, could reduce the condensation density to $\approx 6.5\rho_0$. If the $K^-N$ interaction has a Yukawa shape, or a repulsive core, that would push the condensation density higher, and if quark drops or hyperons reduce the value of $\mu_e(\rho)$ at higher densities, as indicated in Fig. 1, kaon condensation becomes unlikely. We acknowledge Wolfram Weise for stimulating discussions. The work of VRP is partly supported by the US National Science Foundation via grant PHY 98-00978; the work of JC is supported by the U.S. Department of Energy under contract W-7405-ENG-36.
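For reference, the Wigner-Seitz energies of Figs. 1-2 follow from a one-dimensional root search of Eq. (14), and the Ericson-Ericson curve from Eq. (19); a minimal sketch of both (not the authors' code; RCKG kinematics, square well, parameters as in the text):

```python
import numpy as np
from scipy.optimize import brentq

hbarc, mK, mN, rho0 = 197.327, 494.0, 939.5, 0.16   # MeV fm, MeV, MeV, fm^-3
V0, R, a = 122.0, 0.7, -0.41                        # MeV, fm, fm

def f(w, r0):
    """Matching condition Eq. (14); its zero gives the kaon energy omega."""
    kap = np.sqrt(((w + V0)**2 - mK**2) * mN / (mN + w + V0)) / hbarc
    k   = np.sqrt((mK**2 - w**2) * mN / (mN + w)) / hbarc
    c   = (1.0 - k * r0) / (1.0 + k * r0)
    e   = np.exp(2.0 * k * (R - r0))
    return (k / kap) * np.tan(kap * R) - (e - c) / (e + c)

def omega_WS(rho):
    r0 = (3.0 / (4.0 * np.pi * rho))**(1.0 / 3.0)
    w  = np.linspace(mK - V0 + 0.5, mK - 0.5, 4000)
    v  = f(w, r0)
    i  = np.where(v[:-1] * v[1:] < 0)[0][0]         # bracket the root
    return brentq(f, w[i], w[i + 1], args=(r0,))

def omega_EE(rho, xi_rho_r0=1.23):
    """Eq. (19) with m_R -> m_K (infinitely heavy neutrons)."""
    r0 = (3.0 / (4.0 * np.pi * rho))**(1.0 / 3.0)
    return mK + (2.0 * np.pi * a * rho * hbarc**2 / mK) / (1.0 - a * xi_rho_r0 / r0)

for x in (0.5, 1.0, 2.0, 4.0):
    rho = x * rho0
    print(f"rho/rho0 = {x:3.1f}:  WS {omega_WS(rho)-mK:7.1f} MeV,"
          f"  EE {omega_EE(rho)-mK:7.1f} MeV,"
          f"  Hartree {-V0*R**3*4*np.pi*rho/3:7.1f} MeV")
```

At $4\rho_0$ the WS root sits essentially on the Hartree value, illustrating the statement above that the Hartree limit is reached by that density.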
# Two-gap model for underdoped cuprate superconductors ## Abstract Various properties of underdoped superconducting cuprates, including the momentum-dependent pseudogap opening, indicate a behavior which is neither BCS nor Bose-Einstein condensation (BEC) like. To explain this issue we introduce a two-gap model. This model assumes an anisotropic pairing interaction among two kinds of fermions with small and large Fermi velocities, representing the quasiparticles near the M and the nodal points of the Fermi surface respectively. We find that a gap forms near the M points, resulting in incoherent pairing due to strong fluctuations. Instead the pairing near the nodal points sets in with phase coherence at lower temperature. By tuning the momentum-dependent interaction, the model allows for a continuous evolution from a pure BCS pairing (in the overdoped and optimally doped regime) to a mixed boson-fermion picture (in the strongly underdoped regime). PACS numbers: 74.20.De, 74.20.Mn, 71.10.-w The underdoped cuprates are characterized by a pseudogap opening below a strongly doping-dependent crossover temperature $T^*(\delta)$, above the superconducting critical temperature $T_c(\delta)$. By decreasing the doping the temperature $T^*$ increases, while the superconducting critical temperature $T_c$ decreases until the insulating state is reached. The different behavior of $T^*$ and $T_c$ as doping is varied finds a counterpart in the different behavior of the coherence energy scale, obtained in Andreev reflection measurements, and the single-particle gap, observed both in angle-resolved photoemission (ARPES) and in tunneling experiments. This has triggered a very active debate on the relevance of non-BCS superconductivity and of a BCS-BEC crossover in these materials. In particular ARPES shows that below $T^*$ the gap opens around the M points of the Brillouin zone \[i.e. $(\pm\pi,0),(0,\pm\pi)$\], suggesting that $T^*$ can be interpreted as a mean-field-like temperature where electrons start to form local pairs without phase coherence. However, it is also found that below $T^*$ substantial portions of the Fermi surface remain gapless. This behavior can be described neither by BCS nor by Bose-Einstein condensation (BEC) schemes. Instead it is suggestive of strong pairing between the states around the M points and of weak coupling near the zone diagonals. Various other experiments carried out below $T_c$ show a doping and temperature dependence of the gap anisotropy and are therefore again suggestive of a strong anisotropy in the pairing potential. In this letter we explore a new direction (neither BCS nor BEC) focusing on the consequences of a strongly anisotropic interaction. To this aim we introduce a two-gap model, where strongly paired fermionic states can coexist and interplay with weakly coupled pairs in different regions of the Fermi surface (FS). This line of thinking was partly explored in Refs. , where only the extreme strong-coupling limit of one component was considered. In particular the view of Ref. would allow one to describe only the very underdoped region of the cuprate phase diagram. Our approach, instead, turns out to be sufficiently flexible to investigate with continuity the evolution of the bifurcation (taking place around optimal doping in the real systems) between a mean-field-like pairing temperature $T^*$ and the coherence superconducting temperature $T_c$. We implement our model by a two-band system with different intraband and interband pairing interactions. One band has a large Fermi velocity $v_F$ and a small attraction, giving rise to largely overlapped Cooper pairs with weak superconducting fluctuations. On the contrary, the other band has a small $v_F$ and a large attraction, resulting in tightly bound pairs having strong fluctuations. At variance with the models of mixed fermions and bosons, we keep the fermionic nature of both the weakly and strongly bound Cooper pairs. A possible realization of a strongly momentum-dependent effective interaction in underdoped cuprates has been proposed in connection with the occurrence of a charge instability for stripe formation. It was indeed suggested that the tendency to spatial charge order (which evolves into a spin-charge stripe phase by lowering the doping) gives rise to an instability line. This line $T_{stripe}^c(\delta)$ starts from a Quantum Critical Point (QCP) at $T=0$ near optimal doping and increases by lowering the doping.
We implement our model by a two-band system with different intraband and interband pairing interactions. One band has a large Fermi velocity $`v_F`$ and a small attraction giving rise to largely overlapped Cooper pairs with weak superconducting fluctuations. On the contrary, the other band has a small $`v_F`$ and a large attraction resulting into tightly bound pairs having strong fluctuations. At variance from the models of mixed fermions and bosons, we keep the fermionic nature of both the weakly and strongly bound Cooper pairs. A possible realization of a strongly momentum-dependent effective interaction in underdoped cuprates has been proposed in connection to the occurrence of a charge instability for stripe formation . It was indeed suggested that the tendency to spatial charge order (which evolves into a spin-charge stripe phase by lowering the doping) gives rise to an instability line. This line $`T_{stripe}^c(\delta )`$ starts from a Quantum Critical Point (QCP) at $`T=0`$ near optimal doping and increses by lowering the doping. By approaching the instability line the pairing is mediated by the strong attractive quasi-critical stripe fluctuations, which affect the states on the FS in a quite anisotropic way: $$V_{\mathrm{eff}}(𝐪,\omega )\stackrel{~}{U}\frac{V}{\kappa ^2+|𝐪𝐪_c|^2i\gamma \omega }$$ (1) where $`\stackrel{~}{U}`$ is the residual repulsive interaction between the quasiparticles, $`\gamma `$ is a damping parameter, and $`𝐪_c`$ is the wavevector of the stripe instability. The crucial parameter $`\kappa ^2`$ is a mass term proportional to the inverse square of the correlation length of charge order $`\xi _c^2`$ and provides a measure of the distance from criticality. At $`T=0`$, in the overdoped regime, $`\kappa ^2`$ is linear in the doping deviation from the critical concentration $`\kappa ^2(\delta \delta _c)`$. On the other hand, in the finite-temperature region above $`\delta _c`$ $`\kappa ^2T`$. In the underdoped regime $`\kappa ^2`$ vanishes approaching the instability line $`T_{stripe}^c(\delta )`$ and extend the singular potential to finite temperatures. For $`\kappa ^20`$, the fermionic states around the M points are such that $`𝐤_F𝐤_{𝐅}^{}{}_{}{}^{}𝐪_𝐜`$ and interact strongly. These are the so-called “hot spots”, they have a low dispersion, and possibly form tightly bound local pairs giving rise to the pseudogaps below $`T^{}T_{stripe}^c(\delta )`$. On the other hand, “cold” states in the arcs of FS around the zone diagonals $`\mathrm{\Gamma }`$-$`Y`$ or $`\mathrm{\Gamma }`$-$`X`$ (nodal points) have larger dispersions and interact more weakly since $`V_{\mathrm{eff}}`$ is now cut-off by $`𝐪_c`$. In the underdoped regime $`\kappa ^20`$ at higher and higher temperatures by lowering the doping and $`V_{\mathrm{eff}}`$ has a more dramatic effect. On the contrary, in going to the optimum and the overdoped region $`V_{\mathrm{eff}}`$ is cut-off first by the temperature and then by the doping itself. All the states then interact more isotropically. — The two-gap model. Irrespectively of the origin of the anisotropy, in the presence of a strong momentum dependence both of ($`i`$) the effective pairing interaction and ($`ii`$) the Fermi velocity, we must allow for enough freedom of the pairing and of its fluctuations in order to capture the relevant physical effects of the anisotropy. Following the above discussion we introduce a simple two-band model for the cuprates. 
We describe the quasiparticle arcs of FS about the nodal points by a free electron band (labelled below by the index 1) with a large Fermi velocity $v_{F1}=k_{F1}/m_1$, and the hot states about the M points with a second free electron band, displaced in momentum and in energy from the first, with a small $v_{F2}=k_{F2}/m_2$ (see Fig. 1). Fig. 1: Sketch of the Fermi surface of underdoped cuprates with quasiparticle arcs (thick solid line) and patches of quasi-localized states (thick dotted line) and the Fermi surface of the two-band spectrum (thin solid line). The assumed electronic spectrum consists therefore of two different free electron dispersions, $\epsilon_1({\bf p})={\bf p}^2/2m_1$ and $\epsilon_2({\bf p})=({\bf p}-{\bf p}_0)^2/2m_2+\epsilon_0$, displaced by a momentum ${\bf p}_0$ and by an energy $\epsilon_0<E_{F1}$ introduced to allow the chemical potential to cross both bands: $E_{F1}=\epsilon_0+E_{F2}$. To connect our two-band structure with the single-band dispersion of the cuprates, we choose ${\bf p}_0=(\pm\pi,0),(0,\pm\pi)$. This choice gives rise to two branches of the band 2, along the $x$ and $y$ directions. Moreover, we only consider Cooper pairs of zero total momentum formed by time-reversed momentum eigenstates. Therefore the 2-2 pairs are always formed by $({\bf k},-{\bf k})$ states on portions of the band 2 symmetrically located on opposite sides with respect to the $\Gamma\equiv(0,0)$ point. If the pairs have $s$-wave symmetry, the branches along $x$ and $y$ of the band 2 are equivalent and just one can be considered. On the other hand, in the case of $d$-wave pairing, the pairs in the branch along $x$ have a different phase from the pairs in the branch along $y$, and both branches have to be treated. Since in this paper our main interest is the interplay between strongly and weakly coupled pairs irrespective of their symmetry, for simplicity we consider the $s$-wave problem. The model Hamiltonian for pairing in the two-band system is taken to be $$H=\sum_{k\sigma i}\epsilon_{ki}n_{k\sigma i}+\sum_{kk^{\prime}pij}V_{ij}(k,k^{\prime})\,c_{k^{\prime}+pj}^{+}c_{-k^{\prime}j}^{+}c_{-ki}c_{k+pi}$$ (2) where $i$ and $j$ run over the band indices 1 and 2 and $\sigma$ is the spin index. The interaction is approximated by a BCS-like attraction given by $$V_{ij}(k,k^{\prime})=-g_{ij}\,\Theta(\omega_0-|\xi_i(k)|)\,\Theta(\omega_0-|\xi_j(k^{\prime})|)$$ (3) with an energy cutoff $\omega_0$. The strongly $q$-dependent effective interaction in the particle-particle channel $V_{\mathrm{eff}}({\bf q}={\bf k}-{\bf k}^{\prime})$ of the original single-band system is accounted for by the $2\times 2$ scattering matrix $\widehat{g}$. The matrix elements $g_{ij}$ couple the electrons within the same band ($g_{11}$ and $g_{22}$) and between different bands ($g_{12}=g_{21}$). The self-consistency equation for the superconducting fluctuation propagator in matrix form is given by $\widehat{L}=\widehat{g}+\widehat{g}\widehat{\Pi}\widehat{L}$, where the particle-particle bubble operator for the two-band spectrum has a diagonal $2\times 2$ matrix form with elements $\Pi_{11}(q)$ and $\Pi_{22}(q)$. The resulting fluctuation propagator is given by $$\widehat{L}(q)=\left(\begin{array}{cc}\tilde{g}_{11}-\Pi_{11}(q)&\tilde{g}_{12}\\ \tilde{g}_{12}&\tilde{g}_{22}-\Pi_{22}(q)\end{array}\right)^{-1},$$ (4) where we have defined $\tilde{g}_{ij}\equiv(\widehat{g}^{-1})_{ij}$.
It is useful to define the temperatures $`T_{c1}^0`$ and $`T_{c2}^0`$ by $`\stackrel{~}{g}_{11}-\mathrm{\Pi }_{11}(0,T)=\stackrel{~}{g}_{11}-\rho _1\mathrm{ln}\frac{\omega _0}{T}\equiv \rho _1\mathrm{ln}\frac{T}{T_{c1}^0}`$ and $`\stackrel{~}{g}_{22}-\mathrm{\Pi }_{22}(0,T)=\stackrel{~}{g}_{22}-\rho _2\mathrm{ln}\frac{\omega _0}{T}\equiv \rho _2\mathrm{ln}\frac{T}{T_{c2}^0}`$, where $`\rho _i=m_i/(2\pi )`$ is the density of states of the $`i`$-th band. In the underdoped regime, to emulate the hot and cold points arising, for instance, in a system near a stripe instability, we assume the following hierarchy among the $`g_{ij}`$ elements: $`g_{22}\sim V/\kappa ^2\gg g_{11}\sim V/|𝐪_c|^2\sim g_{12}`$. Then one has $`\stackrel{~}{g}_{11}\simeq 1/g_{11},\stackrel{~}{g}_{22}\simeq 1/g_{22},\stackrel{~}{g}_{12}\simeq -g_{12}/(g_{11}g_{22}).`$ In this limit $`T_{c1}^0`$ and $`T_{c2}^0`$ (with $`T_{c2}^0\gg T_{c1}^0`$) assume the values of the two BCS critical temperatures of the two decoupled bands. The mean-field BCS superconducting critical temperature of the coupled system, $`T_c^0`$, is defined by the equation $`\text{det}\widehat{L}^{-1}(𝐪=0,T_c^0)=0`$. We obtain $`T_c^0>T_{c2}^0`$, given by
$$T_c^0=\sqrt{T_{c1}^0T_{c2}^0}\mathrm{exp}\left[\frac{1}{2}\sqrt{\mathrm{ln}^2\left(\frac{T_{c2}^0}{T_{c1}^0}\right)+\frac{4\stackrel{~}{g}_{12}^2}{\rho _1\rho _2}}\right].$$ (5)

— The Ginzburg-Landau approach. The role of fluctuations can be investigated within a standard Ginzburg-Landau (GL) scheme when both $`\rho _2g_{22}\omega _0<E_{F2}`$ and $`\omega _0<E_{F2}`$. Under these conditions the chemical potential is not affected significantly by pairing. Moreover, in order to remain within the GL approach, we assume that the fluctuation corrections to the BCS result are not too strong. The relevance of the spatial fluctuations of the order parameter is assessed by the gradient-term coefficient $`\eta `$, which provides the momentum dependence of the fluctuation propagator in Eq.(4). In particular, the small-$`q`$ expansion of the particle-particle bubbles reads $`\mathrm{\Pi }_{11}(q)\simeq \mathrm{\Pi }_{11}(0)-\rho _1\eta _1q^2`$ and $`\mathrm{\Pi }_{22}(q)\simeq \mathrm{\Pi }_{22}(0)-\rho _2\eta _2q^2`$. Here $`\eta _i(i=1,2)`$ is the gradient-term coefficient of the $`i`$-th band which, in 2D and for a free-electron band, is given by $`\eta _i=(7\zeta (3)/32\pi ^2)v_{Fi}^2/T^2`$, with $`\eta _1\gg \eta _2`$ . In the absence of the interband coupling $`g_{12}`$, $`\eta _1`$ provides the (large) gradient-term coefficient corresponding to (the weak) superconducting fluctuations of the band with a large $`v_{F1}`$, while (the small) $`\eta _2`$ corresponds to (strong) superconducting fluctuations of band 2. For the coupled system near $`T_c^0`$, the coefficient $`\eta `$ in terms of $`\eta _1`$ and $`\eta _2`$ is obtained by evaluating $`(\widehat{L}^{-1})_{ij}\propto \left(ϵ+\eta q^2\right)`$ in terms of the relative temperature deviation $`ϵ\equiv (T-T_c^0)/T_c^0`$. We get the expression
$$\eta =\frac{\rho _1(\stackrel{~}{g}_{22}-\mathrm{\Pi }_{22})\eta _1+\rho _2(\stackrel{~}{g}_{11}-\mathrm{\Pi }_{11})\eta _2}{-T(\stackrel{~}{g}_{22}-\mathrm{\Pi }_{22})d\mathrm{\Pi }_{11}/dT-T(\stackrel{~}{g}_{11}-\mathrm{\Pi }_{11})d\mathrm{\Pi }_{22}/dT},$$ (6)
where all the bubbles are evaluated at $`q=0`$.
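Equation (5) is easy to verify against a direct root search of $`\text{det}\widehat{L}^{-1}(0,T)=0`$. A short sketch (Python, with illustrative values of $`\rho _i`$, $`T_{ci}^0`$ and $`\stackrel{~}{g}_{12}`$ chosen by us, not taken from the paper):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative (made-up) parameters: densities of states and decoupled BCS temperatures.
rho1, rho2 = 1.0, 3.0
Tc1, Tc2 = 1.0, 20.0          # Tc2 >> Tc1, as for the strongly coupled band 2
gt12 = 0.8                    # interband element of g-tilde

def det_Linv(T):
    # det L^-1(q=0, T) = rho1*ln(T/Tc1) * rho2*ln(T/Tc2) - gt12^2
    return rho1 * np.log(T / Tc1) * rho2 * np.log(T / Tc2) - gt12 ** 2

# Numerical root above Tc2 ...
Tc_num = brentq(det_Linv, Tc2 * 1.0001, Tc2 * 10)

# ... versus the closed form of Eq. (5).
Tc_eq5 = np.sqrt(Tc1 * Tc2) * np.exp(
    0.5 * np.sqrt(np.log(Tc2 / Tc1) ** 2 + 4 * gt12 ** 2 / (rho1 * rho2)))

print(Tc_num, Tc_eq5)   # the two numbers coincide; both exceed Tc2
```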
Using the definitions of $`T_{c1,2}^0`$ and the condition $`\text{det}\widehat{L}^{-1}(𝐪=0,T_c^0)=0`$, the coefficient $`\eta `$ can be explicitly written as
$$\eta =\alpha _1\eta _1+\alpha _2\eta _2,\qquad \mathrm{with}\qquad \frac{\alpha _1}{\alpha _2}=\frac{\stackrel{~}{g}_{12}^2}{\rho _1\rho _2\mathrm{ln}^2(T_c^0/T_{c1}^0)},$$ (7)
and $`\alpha _1+\alpha _2=1`$. The presence of a fraction of electrons with a large $`\eta _1`$ increases the stiffness of the whole electronic system (i.e. it increases $`\eta `$ with respect to $`\eta _2`$). However, when the mean-field critical temperature $`T_c^0`$ is much larger than $`T_{c1}^0`$, the correction to $`\eta _2`$ due to the interband coupling is small. At the same time the Ginzburg number is large, implying a sizable mass correction $`\delta ϵ(T)`$ to the “mass” $`ϵ(T)`$ of the bare propagator $`\widehat{L}(q)`$. The renormalized critical temperature $`T_c^r`$, given by the equation $`ϵ(T_c^r)+\delta ϵ(T_c^r)=0`$, is lower than $`T_c^0`$ . By evaluating the renormalized gradient-term coefficient $`\eta ^r`$ in the presence of the mass correction, we find that it is still given by Eq. (7) with $`T_c^0`$ replaced by $`T_c^r`$. Therefore $`\eta ^r=\eta (T_c^r)`$ is greater than $`\eta (T_c^0)`$. This result indicates that the mass renormalization of the fluctuation propagator tends to lower $`T_c`$ and, at the same time, to increase the gradient-term coefficient $`\eta `$ by increasing the coupling to $`\eta _1`$. As a consequence the effective Ginzburg number is reduced and the system is stabilized with respect to fluctuations, allowing for a coherent superconducting phase even in the limit $`\eta _2\to 0`$.

Within the GL approach we associate the temperature $`T_c^0\simeq T_{c2}^0`$ with the crossover temperature $`T^{*}`$, and $`T_c^r`$ with the superconducting critical temperature $`T_c`$ of the whole system. In the region $`T_c<T<T^{*}`$ the pseudogap is formed in band 2. Superconducting fluctuations only affect band 2, while they are immaterial for band 1, where the Fermi surface is maintained until phase coherence sets in. Within the Stripe-QCP scenario the coupling $`g_{22}`$ is related to the singular part of the effective interaction mediated by the stripe fluctuations. $`g_{22}`$ is the most doping-dependent coupling and attains its largest values in the underdoped regime, where $`\kappa ^2`$ vanishes at higher temperatures. The regular parts of the interaction, $`g_{11}`$ and $`g_{12}`$, being cut off by $`𝐪_𝐜`$, are instead only weakly doping-dependent. In the region of validity of the GL approach, the explicit calculations show that $`r(\delta )\equiv (T_c^0-T_c^r)/T_c^0\simeq (T^{*}-T_c)/T^{*}`$ increases with increasing $`g_{22}`$, i.e., with decreasing doping. For small values of $`r`$ we find that both $`T^{*}`$ and $`T_c`$ increase. This regime corresponds to the overdoped and optimally doped region. Specifically, above optimum doping, $`g_{22}`$ and $`g_{11}`$ become comparable and the two lines merge together. For $`r\sim 0.25÷0.5`$, $`T_c`$ instead decreases while $`T^{*}`$ keeps increasing with decreasing doping. The large values of $`r`$, which are attained in the underdoped region, show that we are reaching the limit of validity of our GL approach. We think, however, that the behavior of the bifurcation between $`T^{*}`$ and $`T_c`$ correctly represents the physics of the pseudogap phase, while a quantitative description would require a more sophisticated approach such as an RG analysis.
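The weights in Eq. (7) are simple to evaluate numerically. A sketch (Python, reusing the illustrative parameters of the previous snippet; $`\eta _1`$, $`\eta _2`$ are arbitrary units chosen by us):

```python
import numpy as np

# Continuing the illustrative parameters used above.
rho1, rho2 = 1.0, 3.0
Tc1, Tc0 = 1.0, 21.44        # Tc0 as obtained from Eq. (5) for those parameters
gt12 = 0.8
eta1, eta2 = 1.0, 0.01       # gradient coefficients, eta1 >> eta2 (arbitrary units)

ratio = gt12 ** 2 / (rho1 * rho2 * np.log(Tc0 / Tc1) ** 2)   # alpha1 / alpha2, Eq. (7)
alpha2 = 1.0 / (1.0 + ratio)
alpha1 = 1.0 - alpha2
eta = alpha1 * eta1 + alpha2 * eta2

print(f"alpha1 = {alpha1:.3f}, alpha2 = {alpha2:.3f}, eta = {eta:.3f}")
# Even a small alpha1 lifts eta well above eta2, stiffening the coupled system.
```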
— The strong-coupling limit. In the very low doping regime, where $`T^{*}`$ has increased strongly, the value of $`g_{22}`$ can be so large as to drive the fermions of band 2 into a strong-coupling regime ($`\rho _2g_{22}\omega _0>E_{F2}`$). In this case, taking $`\omega _0>E_{F2}`$, the chemical potential is pulled below the bottom of band 2. In this limit of tightly bound 2-2 pairs, the propagator of the superconducting fluctuations in band 2, i.e. $`L_{22}(q)`$, assumes the form of a single pole for a bosonic particle. Since $`E_{F1}`$ is still the largest energy scale in the problem, the fermionic character of the particles in band 1 is preserved. The critical temperature of the system is again obtained from the vanishing of $`det\widehat{L}^{-1}(q=0)`$ where, however, the chemical potential is now self-consistently evaluated including the self-energy corrections to the Green function in band 2 and the fermions left in band 1. One gets $`\mathrm{\Pi }_{22}=\rho _2\omega _0/|\mu _2|`$ and
$$T_c^0=T_{c1}^0\mathrm{exp}\left[\frac{\stackrel{~}{g}_{12}^2}{\rho _1\rho _2\omega _0}\frac{|\mu _2||\mu _B|}{(|\mu _2|-|\mu _B|)}\right]$$ (8)
where $`\mu _2`$ is the chemical potential measured with respect to the bottom of band 2 and $`|\mu _B|=\rho _2g_{22}\omega _0`$ represents the bound-state energy. The calculation of $`\mathrm{\Pi }_{22}`$ at small $`q`$ leads to a finite value of $`\eta _2=1/(8m_2|\mu _2|)`$, while the small-$`q`$ expansion of $`det\widehat{L}^{-1}`$ provides the new $`\eta `$ coefficient
$$\eta =\eta _1+\rho _1\rho _2\omega _0\frac{g_{22}^2}{|\mu _2|}\mathrm{ln}^2(T_c^0/T_{c1}^0)\eta _2$$ (9)
In this strong-coupling case most of the non-mean-field effects have been taken into account by the formation of the bound state, which occurs at a very high temperature of order $`\rho _2g_{22}\omega _0`$ and provides the new $`T^{*}`$ in this regime. $`\eta \simeq \eta _1`$ stays sizable, and the fluctuations do not strongly reduce $`T_c^r`$ further with respect to $`T_c^0`$: $`T_c\equiv T_c^r\simeq T_c^0`$. In this low-doping regime $`\frac{T^{*}-T_c}{T^{*}}`$ approaches its largest values before $`T_c`$ vanishes.

The strong-coupling limit of our model shares some similarities, as well as some important differences, with phenomenological models of interacting bosons and fermions . In particular, in the model of Ref. pairs of non-interacting electrons are scattered from the Brillouin zone near the nodal points into dispersionless boson states, localized about the M points at an energy $`\epsilon _B`$. The correspondence with our model can be seen by noticing that the tightly bound bosonic states correspond to $`\rho _2g_{22}\omega _0\gg E_{F2}`$ and $`g_{11}=0`$. The tightly bound dispersionless bosonic states are fully incoherent ($`\eta _2=0`$), while the fermionic states are unpaired as long as bosons and fermions are independent. The fermion-boson coupling $`g_{12}`$ effectively introduces an $`e`$-$`e`$ coupling of order $`\rho _1\rho _2\omega _0g_{12}^2/\epsilon _B`$ and drives the system superconducting. In this particular limiting case of our model we recover the results of Ref. , with an explicit expression for the bosonic level, $`\epsilon _B=|\mu |-|\mu _B|`$, whereas in Ref. it is a free phenomenological parameter.

— Conclusions. In our paper we have analyzed the pairing properties of the underdoped cuprates in terms of an effective two-gap model, motivated by the strong anisotropy of the band dispersions and of the effective pairing interaction.
The crucial schematization was the introduction of two weakly coupled bands, in order to preserve a substantial distinction between the superconducting order parameter in different regions of momentum space. This has to be contrasted with more standard approaches, which produce one single gap (even with a complicated momentum dependence) and a common fluctuating order parameter over the whole FS. Our approach allows for different fluctuation regimes for pairs in different $`𝐤`$ regions. According to our analysis, the strongly bound pairs forming at a high temperature $`T^{*}`$ can experience large fluctuations until the system is stabilized by the coupling with the more weakly bound BCS-like states, leading to a coherent superconducting state at $`T_c<T^{*}`$. $`T_c`$ and $`T^{*}`$ merge around or above optimum doping, where $`g_{22}`$, according to our stripe scenario, becomes of the order of $`g_{11}`$. Our model shares similarities with the fermion-boson models for cuprates, to which it reduces in the strong-coupling limit for large $`g_{22}`$ and $`g_{11}\to 0`$. The two-gap model considered here applies to a much wider doping region and is more suitable to describe the crossover to the optimal and overdoped regime, where no preformed bound states are present and the superconducting transition is quite similar to a standard BCS transition.

Acknowledgments. We thank S. Caprara for helpful discussions.
# Age dating old globular clusters in early-type galaxies

## 1. Introduction

Globular clusters can be old. Very old. Sometimes older than the universe. This is then referred to as a cosmological paradox, and it indicates that you are using wrong values for your favorite cosmological parameters. But how old are they really, and why do we care? Age dating old stellar populations has never been trivial, but it is crucial for our understanding of the universe. For example, the ages of Galactic globular clusters are still the reference for the age of the universe. This aspect of the problem was recently discussed in several reviews (e.g. Salaris, Degl’Innocenti, & Weiss 1997; Sarajedini, Chaboyer & Demarque 1999 and references therein; see also Weiss et al. in these proceedings). Here, we would like to present the current methods (from a purely observational point of view) to determine ages of star clusters in galaxies beyond the Local Group. In other words: how to determine ages from the integrated light of star clusters. These ages are used to date the major epochs of star formation in the host galaxies, and thus to constrain the formation history of these same galaxies. The next section presents age determinations from spectroscopy, with its advantages and problems. In section 3, we present photometric methods to derive relative ages of populations, and their caveats. A few concluding remarks are given in Section 4.

## 2. Spectroscopy

Spectroscopy of the integrated light of globular clusters allows the age determination of individual clusters by measuring various absorption-line indices in their spectra. The faint magnitudes of the objects prohibit high-resolution spectroscopy. Furthermore, one would like to compare the results with older/other ones, i.e. use a ‘standard’ system (e.g. the Lick indices), which was measured on low-resolution spectra (6Å to 9Å resolution) in the wavelength range typically ranging from 3800Å to 6500Å, sometimes including the Ca triplet around 8500Å.

### 2.1. The real life

In order to obtain spectra of globular clusters, one needs to identify them in the first place. Therefore, each spectroscopic survey must be preceded by a photometric one. All studies are still contaminated to some extent by foreground stars (mainly M stars) and compact background galaxies at low redshift. Out to a distance of ∼30–40 Mpc the globular clusters can be (barely) resolved with WFPC2 on HST, which is currently the best method to prevent contamination by foreground stars, when associated with a color selection. However, HST photometry is usually available for a small field only. Multi-color, wide-field photometry, especially over a large color baseline, can select out background galaxies efficiently. Current studies that use both these methods to select their globular cluster candidates have typical contamination rates of less than 10%–20%. Further, the advent of 10m-class telescopes allows the spectroscopy of fainter objects, i.e. the choice of “secure” globular cluster candidates within the spectroscopic field has increased significantly, making modern studies even more efficient. Still, the method is time-consuming: typical exposure times needed to reach a signal-to-noise ratio useful for deriving ages are around 2–3 h, even with 10m-class telescopes. Also, the current multiplexity of FORS 1&2 (VLT) and LRIS (Keck) is low, with around 20–30 candidates per set-up.
Instruments such as VIMOS (VLT) and DEIMOS (Keck), with a multiplexity of around 100–150 objects per set-up, will significantly improve the efficiency of such studies. Once spectra are obtained, absorption-line indices are measured. In order to compare these with existing population synthesis models, one uses a standard system of absorption-line indices. This standard system was implicitly adopted to be the Lick system (see Trager 1998 for the latest update), in which a large number of early-type galaxies (and globular clusters) were originally measured. The index measurements of the Balmer lines are finally compared to population synthesis models and the ages are derived.

### 2.2. Problems and accuracy

The last sentence is overly optimistic and hides many problems, some of which are illustrated in Fig. 2. First, H$`\beta `$ is the index mainly used to derive ages. This line index is not a pure age indicator but is also sensitive to metallicity (half as much as it is sensitive to age, see Worthey 1994). This is illustrated in Fig. 2 by the Milky Way data, which are roughly coeval and should lie on a horizontal line if H$`\beta `$ were a perfect age indicator. In contrast, they describe an almost vertical feature at low metallicities. This is the reason why, in order to determine ages, H$`\beta `$ is generally plotted against a very metal-sensitive feature (here Mg<sub>2</sub>) to understand and take into account the metallicity contribution to H$`\beta `$. Ideally, an age-sensitive feature would mainly probe the turn-off in the Hertzsprung–Russell diagram. Unfortunately, H$`\beta `$ is very sensitive to blue horizontal branches too. This is especially a problem in metal-poor populations (typically exhibiting an extended horizontal branch), but it could also falsify results in metal-rich populations hosting blue horizontal-branch stars. H$`\beta `$ is therefore sensitive, to some extent, to the second-parameter effect. We note here that Worthey (1994) did not include any blue horizontal branches in his models. Furthermore, H$`\beta `$ can be contaminated by line emission, although this is less of a concern for old globular clusters than for galaxies with some recent star formation. Lastly, it should be noted that the “narrow” definition of H$`\beta `$ is not well suited to measuring the broad Balmer lines of young populations.

The second big practical problem is the accuracy with which H$`\beta `$ is currently measured. Figure 2 illustrates that current typical errors for distant globular clusters are of the order of a factor of 2 in age. Spectra with higher signal-to-noise must be obtained. Previous studies were either limited by the telescope size, or by the blue response of the CCDs used, or simply optimized for metallicity measurements between 5000Å and 6000Å. Future studies must concentrate on getting enough signal-to-noise below 5000Å, which is feasible with the advent of 10m-class telescopes. Also, if the goal is to get a relative age difference between sub-populations of (rather than individual) globular clusters, the measurement of enough representative clusters currently allows a determination to within 2–3 Gyr.

The third problem, already mentioned, is the models to which the data are compared. Current population synthesis models do not agree on the absolute ages derived from line indices. Worse, they also influence the relative ages: i.e. the spacing between the different isochrones in Fig. 2 varies from model to model.
The three points above should make clear that in order to get relative ages between individual clusters or cluster sub-populations one needs to work on the following points. Other age-sensitive features should be used (e.g. higher-order Balmer lines), with a new definition of the indices, perhaps at higher resolution, in order to avoid metal lines in the definition of the bands. These features need to be measured with a higher signal-to-noise than is currently done. And finally, the population synthesis models need to be brought into agreement with each other.

## 3. Photometry

Spectroscopy being extremely time-consuming, ways of measuring ages for old globular clusters from photometry were explored.

### 3.1. The basic idea

Broad-band colors suffer from the well-known age-metallicity degeneracy. That is: younger ages can be compensated by higher metallicities. The goal is therefore to find two photometric quantities that depend differently on age and on metallicity, in order to break the degeneracy. Broad-band colors at old ages are all much more metallicity-sensitive than they are age-sensitive, and any combination of them (from U−V to V−K) plotted against another is unable to separate ages. A solution to this problem was to combine a color with a magnitude. Colors are, as mentioned above, metallicity-sensitive, while magnitudes are rather age-sensitive. Plotting the one against the other breaks the age-metallicity degeneracy and allows the determination of relative ages.

### 3.2. Some results and caveats

As for spectroscopy, this method sounds easier than it is in reality. First, the individual magnitudes depend of course primarily on the mass of the globular clusters. The masses being unknown, or dependent on the exact distance, mass-to-light ratio, etc., this method cannot be applied to individual clusters. However, one can determine a mean color and a “mean” magnitude for a globular cluster population. The mean color is just the peak of the color distribution of the cluster population. The “mean” magnitude is taken to be the turn-over magnitude of the luminosity function, corresponding to the characteristic mass of the cluster population (see Kissler-Patig; Miller; Fritze-von Alvensleben; McLaughlin, all in these proceedings). The exact value of this characteristic mass is unknown to within 10%–20%, and usually the distance to the objects is uncertain by a similar amount, so that absolute ages cannot be derived. The method remains, however, useful for comparing the ages of different sub-populations (e.g. the case of NGC 4472 by Puzia et al. 1999). One assumption is that both sub-populations have the same characteristic mass, implying that color and magnitude depend on age and metallicity only. This assumption seems supported by current theory and observations (see the above reference), but it can also be checked by comparing the age differences between the sub-populations derived from various filter combinations. The results from the various bands will only agree if the quantities are indeed only dependent on age and metallicity (fully taken into account by the models), and systematic differences will appear if the characteristic masses of the two compared populations differ. Figure 3 illustrates the method. It compares the results derived with respect to two different population synthesis models.
As the error bars illustrate, it is rather easy to derive a mean color for a sub-population; the “mean” magnitude, however, is much harder to measure and requires a large sample of clusters with good photometry and knowledge of the finding incompleteness as a function of cluster magnitude and background luminosity. In this case, the relative age between the two populations can be derived to within 1–2 Gyr. Note, however, that even the age difference depends on the population synthesis models used for the comparison. We picked two extreme cases which illustrate that the uncertainties in the models do not allow a determination of the relative ages to better than 3–4 Gyr. In one case the luminosity is, still in the I band, dependent on metallicity, while in the other case such a dependence exists in the V band but disappears in the I band (as seen from the almost horizontal isochrones). The consequence is that for the second model (Worthey 1994) the results from the V band indicate that only a marginal difference in age exists between the metal-poor and metal-rich clusters, while the results in the I band seem to favor an older metal-rich population. Overall, the comparison (taken from Puzia et al. 1999) supports coeval metal-poor and metal-rich populations to within the uncertainties (3–4 Gyr). Similar, perhaps a little less rigorous, studies came to comparable conclusions. Kissler-Patig et al. (1997) found the two sub-populations in NGC 1380 to be coeval, with weak evidence that the metal-rich population might be 3–4 Gyr younger. Kundu et al. (1999) find a metal-rich population younger by 3–6 Gyr than the metal-poor population in M 87.
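As an illustration of the two photometric ingredients, namely the peak of the color distribution and the turn-over magnitude of the luminosity function, the sketch below recovers both from a mock cluster population. It is Python on synthetic data with invented numbers, shown purely to illustrate the procedure; a real measurement would also have to model the finding incompleteness discussed above.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Mock cluster population: a bimodal color distribution and a Gaussian
# luminosity function (all numbers are invented for illustration).
colors = np.concatenate([rng.normal(0.95, 0.08, 300),    # "blue" (metal-poor) peak
                         rng.normal(1.20, 0.08, 200)])   # "red" (metal-rich) peak
mags = rng.normal(24.0, 1.3, 500)                        # turn-over at m = 24.0

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Turn-over magnitude: fit a Gaussian to the binned luminosity function.
counts, edges = np.histogram(mags, bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(gaussian, centers, counts, p0=[50, 23.5, 1.0])
print(f"recovered turn-over magnitude: {popt[1]:.2f}")

# Peak colors of the two sub-populations: fit a double Gaussian.
ccounts, cedges = np.histogram(colors, bins=30)
ccenters = 0.5 * (cedges[:-1] + cedges[1:])
double = lambda x, a1, m1, s1, a2, m2, s2: (gaussian(x, a1, m1, s1)
                                            + gaussian(x, a2, m2, s2))
cpopt, _ = curve_fit(double, ccenters, ccounts,
                     p0=[40, 0.9, 0.1, 25, 1.25, 0.1])
print(f"recovered peak colors: {cpopt[1]:.2f}, {cpopt[4]:.2f}")
```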
## 4. Concluding remarks

If there is one thing to remember: getting ages of old globular clusters from integrated properties is tough! But it can be done. And although the current methods still have limits, there is good hope that in the near future we will be able to get relative ages of individual clusters to within 2–3 Gyr, and relative ages between groups of clusters to within 1–2 Gyr. Differences in the models currently limit the accuracy with which relative ages can be determined to 3–4 Gyr. Absolute ages await an agreement between the different stellar population models (including a better knowledge of stellar evolution), which is probably to be expected at the earliest in a decade. When individual ages are required, spectroscopy is the only possible method. However, for relative ages of entire populations, photometry can be used. The latter will also allow one to determine relative differences in the characteristic mass between cluster sub-populations.

#### Acknowledgments.

I would like to thank the organizers, and in particular Ariane Lançon, for initiating and organizing such a pleasant meeting. Special thanks to Claudia Maraston for her patience in trying to convey some of her expert knowledge on population synthesis models to a dumb observer, and for her plot. Thanks also to Thomas Puzia, responsible for the figures in the photometry section.

## References

Kissler-Patig, M., Richtler, T., Storm, J., & Della Valle, M. 1997, A&A, 327, 503
Kundu, A., Whitmore, B.C., Sparks, W.B., et al. 1999, ApJ, 513, 733
Maraston, C. 1998, MNRAS, 300, 872 (and electronic updates)
Puzia, T.H., Kissler-Patig, M., Brodie, J.P., & Huchra, J.P. 1999, AJ, December issue
Salaris, M., Degl’Innocenti, S., & Weiss, A. 1997, ApJ, 479, 665
Sarajedini, A., Chaboyer, B., & Demarque, P. 1997, PASP, 109, 1321
Trager, S.C. 1998, PhD thesis, Santa Cruz, CA
Worthey, G. 1994, ApJS, 95, 107 (and electronic updates)

## Discussion

B. Miller: How does Washington photometry compare to other bands for determining metallicities of old globular clusters?

M. Kissler-Patig: Washington photometry uses the C (somewhere between U and B) and T<sub>1</sub> (∼R) bands to derive metallicities. It is more sensitive to metallicity than B−I, the most sensitive combination of Johnson-Cousins bands, but it is less sensitive than V−K. However, it has the advantage over the latter that it can be obtained with a single instrument.

J.C. Mermillod: What is the population producing the H$`\beta `$ line: turn-off dwarfs, blue horizontal-branch stars, or blue stragglers?

M. Kissler-Patig: All of the above. H$`\beta `$ is located at ∼4800Å. Ideally, you would like it to be dominated by turn-off dwarfs, but especially for metal-poor populations, the horizontal branch is a significant contributor. I don’t think that blue stragglers contribute significantly unless you have a very large population of them, but they represent an uncertainty factor as well.

J. Gallagher: I am also concerned about the ability to separate age and metallicity well enough to be able to say with certainty that the blue and red clusters are coeval. My problem is that one can get very similar distributions of stars on a theoretical CMD at different ages – including luminosities – which could lead to increased uncertainties in relative ages of cluster groups.

M. Kissler-Patig: Indeed, the problem you state is a concern. However, I think that several points save you from making too large errors. The first one is that we are “averaging” over hundreds of clusters (i.e. CMDs) and are not very sensitive to peculiar effects in one or the other cluster (such as weird horizontal branches etc…). Second, we are starting to use a wide range of colors, probing very different regions of the CMD (e.g. the K band will hardly be sensitive to the turn-off region or the horizontal-branch morphology, while the B band will). We should therefore see unexpected differences between the bands when compared to the models, if the CMDs differ a lot from the model predictions. This whole business then reduces to model uncertainties, which are present to some extent as mentioned, but do not disqualify the method as such, nor the result that the populations are coeval within a few Gyr.
## 1 Introduction.

The strings which are responsible for confinement in the infrared region can survive in the ultraviolet region as well and be responsible for non-perturbative effects at small distances. The simplest manifestation of the short strings in QCD would be a stringy piece in the heavy quark–antiquark potential at small distances:
$$V(r)=-\alpha /r+\sigma _0r,\qquad r\to 0$$ (1)
This linear piece could be related to divergences of the perturbative series in large orders, revealed by the so-called ultraviolet renormalon (see and references therein). In this sense, one can speculate that the short strings are a non-perturbative counterpart of the ultraviolet renormalon. It is of course far from trivial to find the non-perturbative potential at short distances in QCD. Thus, we are invited to consider a simpler model with confinement. So far, only the Abelian Higgs model has been analyzed, and the stringy potential at short distances was indeed found . The physics behind the stringy potential is highly non-trivial and can be viewed as a manifestation of the Dirac strings. It is worth mentioning that physical manifestations of the Dirac strings were first found in the example of compact photodynamics .

In Sect. 2 we review the results obtained in the case of the Abelian Higgs model and comment on the connections with the compact $`U(1)`$. In Sect. 3 we consider the potential at short distances within another $`U(1)`$ model, namely the compact 3D electrodynamics. As is well known, it exhibits confinement of the electric charges , i.e. a linear potential at large distances. We do find a non-analytic behavior of the potential at short distances. However, as is argued in Sect. 4, the new non-analytical terms may disappear once the distance $`r`$ is much smaller than the size of the monopoles present in the model. The whole consideration here is at the classical level.

In the case of QCD, the use of the lattice regularization assumes that the Dirac strings are allowed and carry no action. From this point of view the situation is reminiscent of the Abelian models mentioned above. However, unlike in the Abelian case, the monopoles associated with the end points of the Dirac string may have zero action. Thus, both classical field configurations and the quantum running of the effective coupling seem to be equally important in the QCD case. As a result, there is no definite prediction for the short-distance behavior of the potential at the moment. We concentrate, therefore, on phenomenological manifestations of the hypothetical short strings. A comparison with existing data indicates that the novel effects corresponding to the short strings are indeed present. Naturally enough, the data refer to a limited range of distances. Thus, the statement above refers to distances of order $`(0.5÷1.0)\text{GeV}^{-1}`$. The QCD phenomenology is reviewed in Sects. 5 and 6, while in Sect. 7 conclusions are given.

## 2 Short Strings in the Dual Abelian Higgs Model.

The first example of drastic non-perturbative effects in the ultraviolet was in fact given in paper . The Lagrangian considered is that of free photons:
$$L=\frac{1}{4e^2}F_{\mu \nu }^2$$ (2)
where $`F_{\mu \nu }`$ is the field strength tensor of the electromagnetic field. Although the theory looks absolutely trivial, this is not the case if one admits the Dirac strings.
Naively, the energy associated with the Dirac strings is infinite:
$$E_{\text{Dirac string}}=\frac{1}{8\pi }\int d^3r\,𝐇^2\sim lA\left(\frac{\text{magnetic flux}}{A}\right)^2\to \mathrm{\infty }$$ (3)
where $`l`$ and $`A`$ are the length and the cross-sectional area of the string, respectively. Since the magnetic flux carried by the string is quantized and finite, the energy diverges quadratically in the ultraviolet, i.e. in the limit $`A\to 0`$. However, within the lattice regularization the action of the string is in fact zero, because of the compactness of the $`U(1)`$. The invisible Dirac strings may end in monopoles, which have a non-zero action. Moreover, the monopole action is linearly divergent in the ultraviolet. However, the balance between the suppression due to this action and the enhancement due to the entropy factor favors a phase transition to monopole condensation at $`e^2\sim 1`$. As a result, test electric charges are subject to a linear potential at all distances if $`e^2`$ is large enough. Thus, in the compact $`U(1)`$ model the non-perturbative effects change the interaction at all distances, for a range of coupling values.

Next, one considers the Dual Abelian Higgs Model with the action
$$S=\int d^4x\left\{\frac{1}{4g^2}F_{\mu \nu }^2+\frac{1}{2}|(\partial -iA)\mathrm{\Phi }|^2+\frac{1}{4}\lambda (|\mathrm{\Phi }|^2-\eta ^2)^2\right\},$$ (4)
here $`g`$ is the magnetic charge, $`F_{\mu \nu }\equiv \partial _\mu A_\nu -\partial _\nu A_\mu `$. The gauge boson and the Higgs are massive, $`m_V^2=g^2\eta ^2,m_H^2=2\lambda \eta ^2`$. There is a well-known Abrikosov-Nielsen-Olesen (ANO) solution to the corresponding equations of motion. The dual ANO string may end on electric charges. As a result, the potential for a test charge-anticharge pair grows linearly at large distances:
$$V(r)=\sigma _{\mathrm{\infty }}r,\qquad r\to \mathrm{\infty }.$$ (5)
Note that there is a Dirac string resting along the axis of the ANO string connecting the monopoles, and its energy is still normalized to zero.

An amusing effect occurs if one goes to distances much smaller than the characteristic mass scales $`m_{V,H}^{-1}`$. Then the ANO string is peeled off and one deals with a naked (dual) Dirac string. The manifestation of the string is that the Higgs field has to vanish along a line connecting the external charges. Otherwise, the energy of the Dirac string would jump to infinity anew. As a result of the boundary condition that $`\mathrm{\Phi }`$ vanishes on a line connecting the charges, the potential contains a stringy piece (1) at short distances . The string tension $`\sigma _0`$ depends smoothly on the ratio $`m_H/m_V`$. In particular, in the Bogomolny limit ($`m_H=m_V`$) the string tension
$$\sigma _0\approx \sigma _{\mathrm{\infty }},$$ (6)
i.e. the effective string tension is the same at all distances.

## 3 Short Strings in 3D Compact Electrodynamics.

As is well known, in 3D compact electrodynamics the charge–anticharge potential is linear at large separations . Below we consider the string tension at small distances and show that the potential has a non-analytical piece associated with short distances. As usual, it is convenient to perform the duality transformation and work with the corresponding Sine-Gordon theory. The expectation value of the Wilson loop in dual variables is:
$$W=\frac{1}{𝒵}\int 𝒟\chi \,e^{-S(\chi ,\eta _𝒞)},$$ (7)
where
$$S(\chi ,\eta _𝒞)=\left(\frac{e}{2\pi }\right)^2\int d^3x\left\{\frac{1}{2}(\nabla \chi )^2+m_D^2(1-\mathrm{cos}[\chi -\eta _𝒞])\right\},$$ (8)
$`m_D`$ is the Debye mass and $`S(\chi ,0)`$ is the action of the model.
If a static charge and an anticharge are placed at the points $`(-R/2,0)`$ and $`(R/2,0)`$ in the $`x_1,x_2`$ plane ($`x_3`$ is the time axis), then
$$\eta _𝒞=\mathrm{arctg}\left[\frac{x_2}{x_1-R/2}\right]-\mathrm{arctg}\left[\frac{x_2}{x_1+R/2}\right],\qquad -\pi \le \eta _𝒞\le \pi .$$ (9)
Below we present the results of numerical calculations of the string tension,
$$\sigma =E/(m_DR),$$ (10)
$$E=\int d^2x\left\{\frac{1}{2}(\nabla \chi )^2+m_D^2(1-\mathrm{cos}[\chi -\eta _𝒞])\right\}.$$ (11)
Note that the energy $`E`$ is measured now in units of the dimensional factor $`(\frac{e}{2\pi })^2`$ (cf. (8)). Variation of the functional (11) leads to the equation of motion $`\mathrm{\Delta }\chi =m_D^2\mathrm{sin}[\chi -\eta _𝒞]`$. For finite $`R`$ we can solve this equation numerically. The energy $`E`$ versus $`m_DR`$ is shown in Fig. 1(a). At large separations between the charges ($`m_DR\gg 1`$) it tends to the asymptotic linear behavior $`E=8m_DR`$, which can also be obtained analytically. At small distances there is a contribution of Yukawa type to the energy (11), which should be extracted explicitly. Note that in the course of rewriting the original $`3D`$ compact electrodynamics in the form (7–8) the Coulomb potential was already subtracted, so that (11) contains a Yukawa-like piece without a singularity at $`R=0`$. It is not difficult to find the corresponding coefficient:
$$E=E^{string}-2\pi (K_0[m_DR]+\mathrm{ln}[m_DR])$$ (12)
where $`K_0(x)`$ is the modified Bessel function and $`E^{string}`$ is the energy of the charge–anticharge pair. The corresponding string tension
$$\sigma ^{string}=\sigma +2\pi (K_1[m_DR]+\frac{1}{m_DR})$$ (13)
is shown in Fig. 1(b). We found that the best fit to the numerical data at small values of $`m_DR`$ is given by the function $`\sigma ^{string}=const(m_DR)^\nu `$, with $`\nu \approx 0.6`$. Thus the non-analytical potential associated with small distances is softer than in the case of the Abelian Higgs model. The source of the non-analyticity is the behavior of the function $`\eta _𝒞(x_1,x_2)`$ (9), which is singular along the line connecting the charges, see Fig. 2(a).
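For orientation, the relaxation solution of the saddle-point equation $`\mathrm{\Delta }\chi =m_D^2\mathrm{sin}[\chi -\eta _𝒞]`$ can be sketched in a few lines. The Python below is only illustrative: the grid, box size, sweep count and the crude Dirichlet boundary are our own choices, with no attempt at the accuracy of the calculations behind Fig. 1:

```python
import numpy as np

# Jacobi relaxation for  laplacian(chi) = m^2 * sin(chi - eta_C)  on a square grid.
# All numerical choices below (grid, box, sweeps) are illustrative only.
m, R, L, N = 1.0, 4.0, 20.0, 101
h = L / (N - 1)
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x, indexing="ij")

# eta_C of Eq. (9), with the branch wrapped back to (-pi, pi]
eta = np.arctan2(Y, X - R / 2) - np.arctan2(Y, X + R / 2)
eta = np.angle(np.exp(1j * eta))

chi = np.zeros_like(eta)
for _ in range(3000):                     # plain Jacobi sweeps: slow but transparent
    nb = (np.roll(chi, 1, 0) + np.roll(chi, -1, 0)
          + np.roll(chi, 1, 1) + np.roll(chi, -1, 1))
    chi = 0.25 * (nb - h * h * m * m * np.sin(chi - eta))
    chi[0, :] = chi[-1, :] = chi[:, 0] = chi[:, -1] = 0.0   # Dirichlet boundary

# Energy functional of Eq. (11), in units of (e / 2 pi)^2
gx, gy = np.gradient(chi, h)
E = np.sum(0.5 * (gx ** 2 + gy ** 2) + m * m * (1.0 - np.cos(chi - eta))) * h * h
print(f"E = {E:.2f} at m_D R = {m * R:.1f}")
```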
## 4 Georgi-Glashow model

The compact electrodynamics is usually considered as the limit of the Georgi–Glashow model in which the radius of the ’t Hooft–Polyakov monopole tends to zero. For a non-vanishing monopole size the problem of evaluating the potential at small distances becomes rather complicated. To avoid unnecessary further complications we consider the 3D Georgi–Glashow model in the BPS limit. The ’t Hooft–Polyakov monopole then corresponds to the fields:
$$\mathrm{\Phi }^a=\frac{x^a}{r}\left(\frac{1}{\mathrm{tanh}(\mu r)}-\frac{1}{\mu r}\right),$$ (14)
$$A_i^a=\epsilon ^{aic}\frac{x^c}{r}\left(\frac{1}{r}-\frac{\mu }{\mathrm{sinh}(\mu r)}\right),\qquad A_0^a=0.$$ (15)
The contribution of this monopole to the full non-Abelian Wilson loop $`W`$ can be calculated analytically. If the static charges are placed at the points $`\pm \stackrel{}{R}/2`$ in the $`x_1,x_2`$ plane, the result is:
$$W(\stackrel{}{b}_1,\stackrel{}{b}_2,\mu )=\mathrm{cos}[h(\mu b_1)]\mathrm{cos}[h(\mu b_2)]+\frac{(\stackrel{}{b}_1\cdot \stackrel{}{b}_2)}{b_1b_2}\mathrm{sin}[h(\mu b_1)]\mathrm{sin}[h(\mu b_2)],$$ (16)
here $`\stackrel{}{b}_{1,2}=\stackrel{}{x}_0\pm \stackrel{}{R}/2`$, $`b_k=|\stackrel{}{b}_k|`$, $`\stackrel{}{x}_0`$ is the center of the ’t Hooft–Polyakov monopole, and
$$h(x)=\frac{\pi }{2}-\frac{x}{2}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\frac{\mathrm{d}\zeta }{\sqrt{x^2+\zeta ^2}\mathrm{sinh}\sqrt{x^2+\zeta ^2}}.$$ (17)
One way to represent (16) in terms of the function $`\eta _𝒞`$ introduced earlier is:
$$\eta _𝒞(x_0,R,\mu )=\mathrm{sign}(y)\mathrm{arccos}W(\stackrel{}{b}_1,\stackrel{}{b}_2,\mu ).$$ (18)
In the limit $`R\mu \to \mathrm{\infty }`$, $`W(\stackrel{}{b}_1,\stackrel{}{b}_2,\mu )\to \mathrm{cos}\eta _𝒞`$ and $`\eta _𝒞(x_0,R,\mu )`$ coincides with the definition (9). For small $`R\mu `$, $`\eta _𝒞`$ is singular not only between the external charges, but also outside this region (see Fig. 2(b)), although the strength of the singularity gets smaller. In the limit of vanishing $`\eta _𝒞`$, $`\sigma _0`$ should vanish. To summarize, it is natural to expect that at distances much smaller than the monopole size the non-analytical piece in the potential associated with small distances disappears. However, for a consistent treatment of the problem one should take into account the modification of the Coulomb-like monopole interaction due to the finite size of the BPS monopoles.

## 5 Topological defects and short-distance potential in QCD.

Knowing the physics of the Abelian models above, it is easy to argue that the perturbative vacuum of QCD is in fact not stable. Indeed, let us make the lattice coarser a la Wilson until the effective coupling of QCD reaches the value at which the phase transition in the compact $`U(1)`$ occurs. Then the QCD perturbative vacuum is unstable against monopole formation. The actual non-perturbative vacuum can of course be very different, but it cannot remain perturbative. A similar remark with respect to the formation of $`Z_2`$ vortices was in fact made a long time ago . Thus, it is natural to expect that singular non-perturbative defects play a role in QCD as well. In the case of the Abelian projection these are Dirac strings with monopoles at their ends, while in the case of the $`Z_2`$ vortices the corresponding infinitely thin objects can be identified with the so-called P-vortices; see and references therein. The existence of infinitely thin topological defects in QCD makes it closely akin to the Abelian models considered above. However, the non-Abelian nature of the interaction brings in an important difference as well. Namely, the topological defects in QCD are marked by singular potentials rather than by a large non-Abelian action.

Consider first the Dirac string. Introduce to this end a potential which is a pure gauge:
$$A_\mu =\mathrm{\Omega }^{-1}\partial _\mu \mathrm{\Omega }$$ (19)
and choose the matrix $`\mathrm{\Omega }`$ in the form:
$$\mathrm{\Omega }(x)=\left(\begin{array}{cc}\mathrm{cos}\frac{\gamma }{2}& \mathrm{sin}\frac{\gamma }{2}e^{-i\alpha }\\ -\mathrm{sin}\frac{\gamma }{2}e^{i\alpha }& \mathrm{cos}\frac{\gamma }{2}\end{array}\right)$$ (20)
where $`\alpha `$ and $`\gamma `$ are the azimuthal and polar angles, respectively. Then it is straightforward to check that we have generated a Dirac string directed along the $`x_3`$-axis, ending at $`x_3=0`$ and carrying the color index $`a=3`$.
It is quite obvious that such Abelian-like strings are allowed by the lattice regularization of the theory. The crucial point, however, is that the non-Abelian action associated with the potential (19) is identically zero. On the other hand, in its Abelian components the potential looks like a Dirac monopole, and such monopoles are known to play an important role in the Abelian projection of QCD (for a review see, e.g., ). Thus, there is a kind of mismatch between the short- and large-distance pictures. Namely, if one considers the lattice size $`a\to 0`$, then the corresponding coupling $`g(a)\to 0`$ and the solution with a zero action (19) is strongly favored at short distances. At larger distances we are aware of the dominance of the Abelian monopoles, which have a non-zero non-Abelian action . The end points of a Dirac string still mark the centers of the Abelian monopoles. Thus, monopoles can be defined as point-like objects topologically, in terms of singular potentials, not action.

A similar logic holds in the case of the so-called P-vortices as well. To detect the P-vortices one uses the gauge maximizing the sum
$$\sum _l|TrU_l|^2$$ (21)
where $`l`$ runs over all the links on the lattice. The center projection is obtained by replacing
$$U_l\to \mathrm{sign}(TrU_l).$$ (22)
Each plaquette is then marked either as $`(+1)`$ or $`(-1)`$, depending on the product of the signs assigned to the corresponding links. A P-vortex pierces each plaquette marked with $`(-1)`$. Moreover, the fraction $`p`$ of the plaquettes pierced by the P-vortices, out of the total number of plaquettes $`N_T`$, obeys the scaling law
$$p=\frac{N_{vor}}{N_T}=f(\beta )$$ (23)
where the function $`f(\beta )`$ is such that $`p`$ scales like the string tension. Assuming independence of the piercings for each plaquette, one then has for the center-projected Wilson loop $`W_{cp}`$:
$$W_{cp}=[(1-p)\cdot (+1)+p\cdot (-1)]^A\simeq e^{-2pA}$$ (24)
where $`A`$ is the number of plaquettes in the area stretched on the Wilson loop. Numerically, Eq. (24) reproduces the full string tension.
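The independent-piercing estimate (24) is easy to check with a toy Monte Carlo. The sketch below (Python; the piercing probability $`p`$ and loop area $`A`$ are arbitrary illustrative values) compares the sampled center-projected Wilson loop with the exact binomial expression and with its exponential, area-law form:

```python
import numpy as np

rng = np.random.default_rng(1)

p, A = 0.05, 20            # piercing probability and loop area (illustrative values)
n_samples = 200_000

# Each of the A plaquettes contributes -1 with probability p, +1 otherwise.
signs = np.where(rng.random((n_samples, A)) < p, -1.0, 1.0)
W_cp = signs.prod(axis=1).mean()

print(f"sampled <W_cp> = {W_cp:.4f}")
print(f"(1-2p)^A       = {(1 - 2 * p) ** A:.4f}")
print(f"exp(-2pA)      = {np.exp(-2 * p * A):.4f}")
# The area-law decay identifies the center-projected string tension: sigma*a^2 ~ 2p.
```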
It is quite obvious that the P-vortices defined this way correspond in the continuum limit to singular gauge potentials $`A_\mu ^a`$ (see, e.g., ). Indeed, the link matrices with a negative trace correspond, in the limit of vanishing lattice spacing $`a\to 0`$, to gauge potentials $`A_l^a`$ of order $`\frac{1}{a}`$. Thus P-vortices correspond to large gauge potentials. The potentials should mostly cancel, however, if the corresponding field-strength tensors are calculated, because of asymptotic freedom. The logic is essentially the same as outlined above for the Dirac string; see, e.g., and references therein. At the moment, it is difficult to say a priori whether the topological defects defined in terms of singular potentials can be considered as infinitely thin from the physical point of view. They might well be gauge artifacts. Phenomenologically, using the topologically defined point-like monopoles or infinitely thin P-vortices, one can measure the non-perturbative $`\overline{Q}Q`$ potential at all distances. It is remarkable, therefore, that the potentials generated both by monopoles and by P-vortices turn out to be linear at all distances:
$$V_{nonpert}(r)\approx \sigma _{\mathrm{\infty }}r\qquad \text{at all }r$$ (25)
Note that the Coulomb-like part is totally subtracted out through the use of the topological defects. Moreover, the absence of a change in the slope agrees well with the predictions of the dual Abelian Higgs model (see Sect. 2). The numerical observation (25) is highly nontrivial. If it were only the non-Abelian action that counted, then the non-perturbative fluctuations labeled by the Dirac strings or by the P-vortices would be bulky (see the discussion above) and the corresponding $`\overline{Q}Q`$ potentials (25) would have been quadratic at short distances $`r`$. This happens, for example, in the model with a finite thickness of the $`Z_2`$ vortices. Also, if the lessons from the Georgi–Glashow model apply (see Sect. 4 above), the finite size of the monopoles would spoil the linearity of the potential at short distances. To summarize, direct measurements of the non-perturbative $`\overline{Q}Q`$ potential indicate the presence of a stringy potential at short distances. The measurements go down to distances of order $`(2\text{GeV})^{-1}`$.

## 6 QCD phenomenology.

In view of the results of the measurements of the non-perturbative potential, it is interesting to reexamine the power corrections with the question in mind whether there is room for novel corrections associated with the short strings. From dimensional considerations alone it is clear that the new corrections are of order $`\sigma _0/Q^2`$, where $`Q`$ is a large generic mass parameter characteristic of the problem at hand. Also, the ultraviolet renormalons in 4D indicate the same kind of correction. Unlike the case of the non-perturbative potential discussed above, other determinations of the power corrections require a subtraction of the dominating perturbative part, which might make the results less definitive. Here we briefly overview the relevant results.

(i) The first claim of the existence of non-standard $`1/Q^2`$ corrections was made in ref. . Namely, it was found that the expectation value of the plaquette minus the perturbation-theory contribution shows $`1/Q^2`$ behavior. On the other hand, the standard Operator Product Expansion results in a $`1/Q^4`$ correction.

(ii) The lattice simulations do not show any change in the slope of the full $`Q\overline{Q}`$ potential as the distances are changed from the largest to the smallest ones, where the Coulombic part becomes dominant. It is known from phenomenological analyses and from calculations on the lattice that realistic QCD corresponds to the dual Abelian Higgs model with $`m_H\approx m_V`$. As mentioned in Sect. 2, the AHM in the classical approximation also gives $`\sigma _{\mathrm{\infty }}\approx \sigma _0`$ at $`m_H=m_V`$.

(iii) The explicit subtraction of the perturbative corrections at small distances from the $`Q\overline{Q}`$ potential in lattice gluodynamics was performed in ref. . This procedure gives $`\sigma _0\approx 5\sigma _{\mathrm{\infty }}`$ at very small distances.

(iv) There exist lattice measurements of the fine splitting of the $`Q\overline{Q}`$ levels as a function of the heavy quark mass . The Voloshin-Leutwyler picture predicts a particular pattern for the heavy-mass dependence of this splitting. Moreover, these predictions are very different from the predictions based on adding a linear part to the Coulomb potential (Buchmuller-Tye potential ). The results of recent calculations of this type favor the linear correction to the potential at short distances.

(v) Analytical studies of the Bethe-Salpeter equation and comparison of the results with the charmonium spectrum data favor a non-vanishing linear correction to the potential at short distances .

(vi) The lattice-measured instanton density as a function of the instanton size $`\rho `$ does not satisfy the standard OPE prediction that the leading correction is of order $`\rho ^4`$. Instead, the leading correction is in fact quadratic .
(vii) One of the most interesting manifestations of short strings might be $`1/Q^2`$ corrections to the standard OPE for the current–current correlation function $`\mathrm{\Pi }(Q^2)`$. It is impossible to calculate the coefficient of the $`1/Q^2`$ correction from first principles, and in ref. it was suggested to simulate this correction by a tachyonic gluon mass. The Yukawa potential with an imaginary mass has a linear attractive piece at small distances, i.e. it reproduces short strings. The use of the gluon propagator with the imaginary gluon mass ($`m_g^2=-0.5\text{ GeV}^2`$) explains unexpectedly well the behavior of $`\mathrm{\Pi }(Q^2)`$ in various channels. To check the model with a tachyonic short-distance mass further, it would be very important to perform accurate calculations of various correlators $`\mathrm{\Pi }(Q^2)`$ on the lattice. There are also alternative theoretical schemes in QCD which predict non-conventional $`1/Q^2`$ corrections .

## 7 Concluding Remarks

The analysis of the data on the power corrections strongly supports the existence of the novel quadratic corrections. There are, however, two caveats to the statement that the novel short-distance power corrections have been detected. On the theoretical side, the existence of short strings has been proven only within the Abelian Higgs model. As for QCD itself, the analysis is so far inconclusive. On the experimental side, the data always refer to a limited range of distances. In particular, the linear non-perturbative potential has been observed at distances of order one lattice spacing, which in physical units is $`(1÷2\text{GeV})^{-1}`$. One can argue that at shorter distances the behavior of the non-perturbative power corrections changes (see, e.g., ), which would be a remarkable phenomenon by itself.

## Acknowledgments

M.N.Ch. and M.I.P. acknowledge the kind hospitality of the staff of the Max-Planck Institut für Physik (München), where part of this work was done. The work of M.N.C., F.V.G. and M.I.P. was partially supported by grants RFBR 99-01230a and INTAS 96-370.
# 𝑠- and 𝑑-wave solution of Eliashberg equations with finite bandwidth

## Abstract

In this work, we discuss the results of the direct solution of the Eliashberg equations with finite bandwidth, in the cases of $`s`$- and $`d`$-wave symmetry of the pair wave function and in the presence of scattering from impurities. We show that the reduction of the critical temperature $`T_\mathrm{c}`$ due to the finite bandwidth depends on the value of the bandwidth itself, but is almost independent of the symmetry of the order parameter. The same happens for the shapes of $`Z(\omega )`$ and $`\mathrm{\Delta }(\omega )`$. Moreover, we discuss the effect of the finite bandwidth on the shape of the quasiparticle density of states. The results clearly indicate that the infinite-bandwidth approximation leads to an underestimation of the electron-boson coupling constant.

High-$`T_\mathrm{c}`$ cuprates and fullerenes are characterized by phononic energies ($`\mathrm{\Omega }_{\mathrm{phon}}`$) comparable with the electronic ones ($`E_\mathrm{F}`$), while in low-$`T_\mathrm{c}`$ superconductors it is always $`E_\mathrm{F}\gg \mathrm{\Omega }_{\mathrm{phon}}`$. This last condition leads to the standard Eliashberg equations, obtained in the limit $`E_\mathrm{F}\to +\mathrm{\infty }`$. In this paper, we study the effect of a finite bandwidth on some relevant physical quantities in the framework of the Eliashberg theory. For simplicity, we disregard the important related problem of the breakdown of Migdal’s theorem, and set the Coulomb pseudopotential $`\mu ^{*}`$ to zero . The kernels of the Eliashberg equations (EE) for the renormalization function $`Z(\omega ,𝐤)`$ and the order parameter $`\mathrm{\Delta }(\omega ,𝐤)`$ contain the retarded electron-boson interaction $`\alpha ^2(\mathrm{\Omega },𝐤,𝐤^{\prime })F(\mathrm{\Omega })`$. Referring to high-$`T_\mathrm{c}`$ cuprates, we assume $`𝐤`$ and $`𝐤^{\prime }`$ to lie in the $`ab`$ plane (CuO<sub>2</sub> plane) and call $`\varphi `$ and $`\varphi ^{\prime }`$ their azimuthal angles in this plane .
We integrate over the effective band normal to the Fermi surface from $`-W/2`$ to $`+W/2`$, and expand $`\alpha ^2(\mathrm{\Omega },\varphi ,\varphi ^{\prime })F(\mathrm{\Omega })`$ in terms of the basis functions $`\psi _0\left(\varphi \right)=1`$ and $`\psi _1\left(\varphi \right)=\sqrt{2}\mathrm{cos}\left(2\varphi \right)`$ in the following way:
$$\alpha ^2(\mathrm{\Omega },\varphi ,\varphi ^{\prime })F(\mathrm{\Omega })=\alpha _{00}^2F(\mathrm{\Omega })\psi _0\left(\varphi \right)\psi _0\left(\varphi ^{\prime }\right)+\alpha _{11}^2F(\mathrm{\Omega })\psi _1\left(\varphi \right)\psi _1\left(\varphi ^{\prime }\right).$$
We then search for an $`s+\mathrm{i}d`$ solution with
$$\mathrm{\Delta }_n(\varphi )\equiv \mathrm{\Delta }(\mathrm{i}\omega _n,\varphi )=\mathrm{\Delta }_s(\mathrm{i}\omega _n)+\mathrm{\Delta }_d(\mathrm{i}\omega _n)\psi _1\left(\varphi \right)$$ (2)
$$Z_n(\varphi )\equiv Z(\mathrm{i}\omega _n,\varphi )=Z_s(\mathrm{i}\omega _n)+Z_d(\mathrm{i}\omega _n)\psi _1\left(\varphi \right).$$
In this case, and in the Matsubara representation, the Eliashberg equations in the presence of impurities become :
$$\omega _nZ_n(\varphi )=\omega _n+\frac{k_BT}{\pi }\sum _{m=-\mathrm{\infty }}^{+\mathrm{\infty }}\int _0^{2\pi }d\varphi ^{\prime }\,\lambda _{m,n}(\varphi ,\varphi ^{\prime })N_m(\varphi ^{\prime })\theta _m(\varphi ^{\prime })+\frac{1}{\pi \tau }\int _0^{2\pi }d\varphi ^{\prime }\,N_n(\varphi ^{\prime })\theta _n(\varphi ^{\prime })$$
$$\mathrm{\Delta }_n(\varphi )Z_n(\varphi )=\frac{k_BT}{\pi }\sum _{m=-\mathrm{\infty }}^{+\mathrm{\infty }}\int _0^{2\pi }d\varphi ^{\prime }\,\lambda _{m,n}(\varphi ,\varphi ^{\prime })P_m(\varphi ^{\prime })\theta _m(\varphi ^{\prime })+\frac{1}{\pi \tau }\int _0^{2\pi }d\varphi ^{\prime }\,P_n(\varphi ^{\prime })\theta _n(\varphi ^{\prime })$$
$$\theta _n(\varphi )=\mathrm{tan}^{-1}\left[\frac{W/2}{\sqrt{\omega _n^2Z_n^2(\varphi )+\mathrm{\Delta }_n^2(\varphi )Z_n^2(\varphi )}}\right]$$
$$\lambda _{m,n}(\varphi ,\varphi ^{\prime })=2\int _0^{+\mathrm{\infty }}d\mathrm{\Omega }\frac{\mathrm{\Omega }\alpha ^2(\mathrm{\Omega },\varphi ,\varphi ^{\prime })F(\mathrm{\Omega })}{\mathrm{\Omega }^2+(\omega _n-\omega _m)^2}$$
where $`\omega _n=\pi (2n+1)k_BT`$ and $`\tau ^{-1}`$ is the impurity scattering rate. Furthermore,
$$P_n(\varphi )=\mathrm{\Delta }_n(\varphi )Z_n(\varphi )/\sqrt{\omega _n^2Z_n^2(\varphi )+\mathrm{\Delta }_n^2(\varphi )Z_n^2(\varphi )}$$ (3)
$$N_n(\varphi )=\omega _nZ_n(\varphi )/\sqrt{\omega _n^2Z_n^2(\varphi )+\mathrm{\Delta }_n^2(\varphi )Z_n^2(\varphi )}.$$
Thus we have four equations for $`Z_s(\mathrm{i}\omega _n)`$, $`Z_d(\mathrm{i}\omega _n)`$, $`\mathrm{\Delta }_s(\mathrm{i}\omega _n)`$ and $`\mathrm{\Delta }_d(\mathrm{i}\omega _n)`$. Here we only consider the case $`Z_d(\mathrm{i}\omega _n)\equiv 0`$ . In our numerical analysis we put, for simplicity, $`\alpha _{11}^2F(\mathrm{\Omega })=g_d\alpha _{00}^2F(\mathrm{\Omega })`$, where $`g_d`$ is a constant and, as a consequence, $`\lambda _d=g_d\lambda _s`$ . We used for $`\alpha _{00}^2F(\mathrm{\Omega })`$ the spectral function we experimentally determined in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> (BSCCO), appropriately scaled to give, in both the $`s`$- and $`d`$-wave cases, $`T_\mathrm{c}=97`$ K in the $`W=\mathrm{\infty }`$ limit. The coupling constants turn out to be $`\lambda _s=3.15`$ for the $`s`$-wave case, and $`\lambda _s=2`$ and $`\lambda _d=2.3`$ for the $`d`$-wave one.
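To give a flavor of the numerics, the sketch below iterates a drastically simplified version of these equations: a single isotropic $`s`$-wave band, no impurities ($`\tau ^{-1}=0`$), and an Einstein spectrum for which $`\lambda _{m,n}=\lambda \mathrm{\Omega }_0^2/[\mathrm{\Omega }_0^2+(\omega _n-\omega _m)^2]`$. It is not the two-band $`s+\mathrm{i}d`$ solver used in the paper, and every numerical value is an illustrative choice of ours; it only shows how the finite-bandwidth factor $`\theta _n`$ suppresses the gap.

```python
import numpy as np

# Simplified isotropic s-wave Eliashberg equations on the Matsubara axis.
# In the isotropic limit, (k_B T / pi) * Integral(0..2pi) dphi' -> 2T.
# Units: hbar = k_B = 1; all parameter values are illustrative only.
T = 0.005          # temperature
lam = 2.0          # coupling constant lambda
Om0 = 0.05         # Einstein boson frequency
Nc = 400           # Matsubara cutoff index

n = np.arange(-Nc, Nc)
wn = np.pi * T * (2 * n + 1)
lam_nm = lam * Om0 ** 2 / (Om0 ** 2 + (wn[:, None] - wn[None, :]) ** 2)

def solve(W):
    """Iterate Z_n, Delta_n for a band of total width W (W=None -> infinite)."""
    Z = np.ones_like(wn)
    D = 0.01 * np.ones_like(wn)          # initial gap guess
    for _ in range(300):
        root = np.sqrt((wn * Z) ** 2 + (D * Z) ** 2)
        theta = np.pi / 2 if W is None else np.arctan(0.5 * W / root)
        N, P = wn * Z / root, D * Z / root
        Z = 1 + (2 * T / wn) * (lam_nm @ (N * theta))
        D = (2 * T / Z) * (lam_nm @ (P * theta))
    return D[Nc]                          # Delta at the lowest Matsubara frequency

for W in (None, 1.0, 0.2):
    print(f"W = {W}:  Delta(i*omega_0) = {solve(W):.5f}")
```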
From the direct numerical solution of the EE we found that the symmetry of $`\mathrm{\Delta }_n(\varphi )`$ depends on the values of the coupling constants $`\lambda _s`$ and $`\lambda _d`$ and, for particular values of $`\lambda _s`$ and $`\lambda _d`$, on the starting values of $`\mathrm{\Delta }_s`$ and $`\mathrm{\Delta }_d`$. As shown in Figs. 1(a) and (b), the dependence of $`T_\mathrm{c}`$ and of $`\mathrm{\Delta }(\mathrm{i}\omega _{n=0})`$ at $`T=4`$ K on the bandwidth $`W`$ is almost the same for both the $`s`$- and $`d`$-wave symmetries. The reduction of the bandwidth results in a sizable reduction of $`T_\mathrm{c}`$ and, to a lesser extent, of $`\mathrm{\Delta }(\mathrm{i}\omega _{n=0})`$. In Fig. 2 we show the $`d`$-wave normalized density of states, calculated through the analytical continuation, with Padé approximants, of the imaginary-axis solution, for various values of $`W`$ at $`T=2`$ K . As in the $`s`$-wave case (not shown), the effect of a small bandwidth is remarkable, particularly at high energies. These curves are in *better agreement* with the experimental tunneling data than those obtained for an infinite bandwidth, in particular because of the presence of a *dip* above the peak (at about $`2\mathrm{\Delta }`$) and because they asymptotically tend to 1 from above. We have also calculated the real and imaginary parts of the functions $`\mathrm{\Delta }(\omega )`$ and $`Z(\omega )`$ in the $`s`$- and $`d`$-wave cases. For small values of $`W`$, these functions are markedly modified with respect to the infinite-bandwidth case, independently of the symmetry. In the inset of Fig. 1(a) we show (only for the $`s`$-wave case) the values of $`\lambda _{\mathrm{eff}}`$ necessary to obtain $`T_\mathrm{c}=97`$ K for various values of $`W`$. We conclude that the use of the standard Eliashberg equations when $`E_\mathrm{F}\sim \mathrm{\Omega }_{\mathrm{phon}}`$ leads to an *underestimate* of the real value of the electron-boson coupling constant.
# Quantum Monte Carlo Study of electrons in low dimensions

## 1. INTRODUCTION

Electrons in low dimensions, as found in modern semiconductor devices, are greatly affected by correlation effects that may dramatically change their behavior and bring about new phenomena and phase transitions. However, at zero magnetic field and in simple systems, such as layers or wires, very low densities are necessary for correlations to play an important role. The situation is somewhat better in coupled systems. The additional correlations due to the interlayer (or interwire) interactions may help in pushing transitions to larger and more easily accessible densities, yielding richer phase diagrams and possibly new phenomena, such as superconductivity or excitonic condensation. In all cases, the quantum Monte Carlo technique provides an effective tool, which allows the determination of the static properties of these systems with unprecedented accuracy. While in the invited paper delivered at St-Malo by one of us (GS) results were presented for both an electron-hole bilayer and a model quantum wire, due to lack of space we shall restrict ourselves here to the latter system. A detailed account of the electron–hole bilayer simulations will be given elsewhere. The theoretical interest in one-dimensional (1D) models is perhaps due in part to their inherent simplicity, which often results in exact solutions. In fact, the problem of interacting Fermions simplifies in one dimension, and one can show that the familiar concept of Fermi liquids has to be abandoned in favor of that of Tomonaga-Luttinger liquids. The interest in 1D models has grown even further in recent years, thanks to the advances in fabrication techniques and the realization of so-called quantum wires, i.e., quasi-one-dimensional electron systems. Thus, the investigation of model 1D electron gases with numerical simulations, which yield results of high accuracy if not exact, is particularly appealing—both in relation to experiments and to other theoretical approaches.

## 2. THE MODEL

In a quantum wire the electronic motion is confined in two directions (say y and z) and free in the third one, x. In the simplest approximation, one assumes that the energy spacing of the one-particle orbitals for the transverse motion is sufficiently large, so that only the orbital lowest in energy, say $`\varphi (y,z)`$, needs to be considered. Hence the total wavefunction of the many-electron system will factorize into an irreducible many-body term for the x motion, $`\mathrm{\Psi }(x_1,\mathrm{},x_n)`$, times a product of $`\varphi `$’s, one per particle. Tracing out the transverse (y,z) motion from the full Schrödinger equation yields an effective 1D problem with an effective 1D interparticle potential. Evidently, different models of confinement yield different effective potentials. One of the first models assumes a harmonic confinement in the transverse plane. More recently a hard-wall confinement has been investigated, with the electrons moving freely in a cylinder of given radius. One may also start from the 2D electron gas and apply a confining potential in one direction; again, both harmonic and hard-wall confinements have been considered. Here, we choose the model of Ref. , with a harmonic confining potential $`U_c(𝐫)=(\mathrm{\hbar }^2/8m^{*}b^4)(y^2+z^2)`$ and a coulombic electron-electron interaction $`e^2/ϵr`$.
The resulting effective 1D potential is readily shown to be $`v(x)=(e^2/ϵ)(\sqrt{\pi }/2b)\mathrm{exp}[(x/2b)^2]\mathrm{erfc}[|x|/2b]`$, with Fourier transform $`v(q)=(e^2/ϵ)E_1[(bq)^2]\mathrm{exp}[(bq)^2]`$. Here $`m^{*}`$ and $`ϵ`$ are, respectively, the effective mass of the carriers and the dielectric constant of the medium in which the carriers move, and $`b`$ measures the wire width. One can easily check that $`v(x)`$ is long ranged, with a Coulomb tail $`e^2/ϵ|x|`$, and is finite at the origin, $`v(0)=(e^2/ϵb)(\sqrt{\pi }/2)`$. The 1D system is made neutral by introducing a background that cancels the $`q=0`$ component of the pair interaction. Earlier investigations of this model employed the so-called STLS approximation, either in its original version or in its sum-rule approach. Both the paramagnetic and the ferromagnetic phases have been studied and the occurrence of a Bloch instability \[a transition from the paramagnetic to the ferromagnetic state\] has been predicted. This, according to the authors, could explain the anomalous plateau which has been observed in the conductance of GaAs quantum wires in the limit of single-channel occupancy.

## 3. DMC RESULTS

To study our model quantum wire we resort here to fixed-node diffusion Monte Carlo (DMC) simulations. As the exact nodes are known in 1D, for a given number of particles we obtain exact estimates of the energy, within the statistical error—the small systematic time-step error involved with the imaginary-time integration was in fact extrapolated out. The estimates of other properties, such as static correlation functions and momentum distributions, remain approximate though very accurate. Below, we present some of our results for the energy and the structure, skipping completely the technical details of our calculations, which can be found elsewhere. We have performed simulations for three different values of the wire width, $`b/a_B^{*}=4, 1, 0.1`$, at selected values of the coupling parameter $`r_s`$, defined in 1D by $`\rho =N/L\equiv 1/(2r_sa_B^{*})`$. Here, $`a_B^{*}=\mathrm{\hbar }^2ϵ/m^{*}e^2`$ is the effective Bohr radius of the material. We should remark that for the model at hand the coupling strength, defined as the ratio between the potential energy of a pair of particles at the mean distance $`r_sa_B^{*}`$ and the Fermi energy, is proportional to $`r_s\times [r_sa_B^{*}v(r_sa_B^{*})]=r_s\times f(r_sa_B^{*}/b)`$, with $`f(x)`$ a growing function of $`x`$. Thus, at fixed $`b`$, the coupling actually increases more than linearly with $`r_s`$, whereas at given $`r_s`$ the coupling increases with decreasing $`b`$—reflecting the obvious fact that a narrower wire enhances the effect of the Coulomb repulsion.

### 3.1 The Energy

Our DMC ground-state energies for wires with $`b=4a_B^{*}`$ and $`b=a_B^{*}`$ are shown in Fig. 1, together with the results of the STLS scheme, which is easily solved numerically. We should note here that the alternative sum-rule approach to STLS yields results somewhat different from the present ones in terms of correlation energy. However, as the correlation energy is a small fraction of the ground-state energy, such differences can be neglected here and the curves in Fig. 1 can also be taken as representative of the results of Ref. . It is evident from Fig. 1 that (i) the STLS predicts a transition from the paramagnetic to the ferromagnetic phase as the coupling strength is increased and (ii) the critical $`r_s`$ value of such a transition increases with the wire width $`b`$.
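Both closed forms of the pair potential defined above are available through standard special functions; a minimal sketch, in effective atomic units ($`e^2/ϵ=1`$, lengths in units of $`a_B^{*}`$), follows. The scaled complementary error function keeps the real-space form numerically stable at large $`|x|`$.

```python
import numpy as np
from scipy.special import erfcx, exp1

def v_real(x, b=1.0):
    """v(x) = (sqrt(pi)/2b) exp[(x/2b)^2] erfc[|x|/2b] with e^2/eps = 1;
    erfcx(y) = exp(y^2) erfc(y) avoids overflow at large |x|."""
    y = np.abs(x) / (2.0 * b)
    return np.sqrt(np.pi) / (2.0 * b) * erfcx(y)

def v_fourier(q, b=1.0):
    """v(q) = E_1[(bq)^2] exp[(bq)^2]; exp1 is the exponential integral E_1.
    Diverges logarithmically as q -> 0, the 1D remnant of the Coulomb tail."""
    y = (b * q) ** 2
    return exp1(y) * np.exp(y)

x = np.array([1e-9, 10.0, 100.0])
print(v_real(x))              # v(0) -> sqrt(pi)/2 ~ 0.886 for b = 1
print(v_real(x[1:]) * x[1:])  # -> 1: the e^2/(eps |x|) Coulomb tail
```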
Our exact energies, on the contrary, give the correct ordering of the two phases imposed by the Lieb–Mattis theorem: the ferromagnetic fluid is higher in energy than the paramagnetic one. In fact, the energy separation between the two phases closes up with increasing $`r_s`$ and, at the largest values of $`r_s`$ considered here, falls within the combined error bars of the two phases. This is hardly surprising. At large coupling, the strong correlations that build up in the system keep particles well apart, and thus statistics ends up having a negligible effect on the total energy. In turn, this makes the sampling of spin correlations extremely hard. An additional comment, which is naturally prompted by Fig. 1, is that at intermediate and large coupling the STLS performs much better for the ferromagnetic phase than for the paramagnetic one. This is most easily appreciated by looking at the insets in the figure. In fact, one may argue on general grounds that it is easier to describe the fully spin-polarized phase, as part of the correlation is automatically built in by the symmetry constraints.

### 3.2 The Structure

In a quantum wire interactions are enhanced, due to the reduced dimensionality, and strong ordering may thus arise at large coupling, even though genuine crystalline order is generally forbidden in 1D. In fact, a 1D system with an interparticle potential decaying as $`1/|x|`$ is borderline in this respect. Ordering may be characterized in terms of structure factors, which measure ground-state correlations between Fourier components of one-body densities. Thus, we shall focus on the number and magnetization static structure factors, respectively $`S_{nn}(k)`$ and $`S_{mm}(k)`$, which are defined by $`S_{\alpha \beta }(k)=\langle \rho _\alpha (k)\rho _\beta (-k)\rangle /N`$, with the number density $`\rho _n(k)=\rho _{\uparrow }(k)+\rho _{\downarrow }(k)`$ and the magnetization density $`\rho _m(k)=\rho _{\uparrow }(k)-\rho _{\downarrow }(k)`$. The (cross) charge-spin correlations, measured by $`S_{nm}(k)`$, need not be considered, since they vanish exactly in the paramagnetic fluid. The building up of quasi-crystalline order with increasing coupling is clearly seen in our DMC results, shown in Fig. 2 for $`b=a_B^{*}`$. The static structure factor $`S_{nn}(k)`$, while very close to the Hartree-Fock prediction at $`r_s=1`$, develops with increasing $`r_s`$ a pronounced peak at $`4k_F`$, which in fact may be shown to be divergent with the number of particles $`N`$ at large couplings (see below). A pronounced peak at $`4k_F`$ corresponds in real space to slowly decaying oscillations, with period equal to the average interparticle distance $`2r_sa_B^{*}`$, thus suggesting quasi-crystalline order. In the same figure we also give the predictions of approximate theories such as STLS or its dynamical version (DSTLS). The STLS only gives the lowering of $`S_{nn}(k)`$ at small and intermediate values of $`k`$ for increasing $`r_s`$, but fails completely in yielding a peak. On the contrary, the DSTLS prediction develops a peak with increasing coupling, though its position is off by about 20%; the height of the peak happens to almost coincide with that of the DMC result at N=22. Similar DSTLS results were recently obtained for a slightly different model of wire. We should mention that at the time of writing we were not able to obtain a solution to DSTLS for $`r_s>6`$.
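The structure factors just defined are straightforward to estimate from sampled configurations; a minimal estimator is sketched below. The uniform random configurations in the usage lines are a stand-in for actual VMC/DMC walkers, purely for illustration.

```python
import numpy as np

def structure_factors(x_up, x_dn, L, n_k=30):
    """MC estimators of S_nn(k) and S_mm(k) for a 1D box of length L.
    x_up, x_dn: (n_samples, N_spin) arrays of particle coordinates."""
    k = 2.0 * np.pi * np.arange(1, n_k + 1) / L          # allowed k > 0
    N = x_up.shape[1] + x_dn.shape[1]
    rho_up = np.exp(-1j * k * x_up[:, :, None]).sum(axis=1)
    rho_dn = np.exp(-1j * k * x_dn[:, :, None]).sum(axis=1)
    S_nn = (np.abs(rho_up + rho_dn) ** 2).mean(axis=0) / N
    S_mm = (np.abs(rho_up - rho_dn) ** 2).mean(axis=0) / N
    return k, S_nn, S_mm

# Uncorrelated uniform positions give S(k) ~ 1, a quick sanity check;
# real walkers from the simulation would show the 4k_F and 2k_F peaks.
rng = np.random.default_rng(7)
L, N = 44.0, 22
up = rng.uniform(0.0, L, size=(500, N // 2))
dn = rng.uniform(0.0, L, size=(500, N // 2))
k, S_nn, S_mm = structure_factors(up, dn, L)
print(S_nn[:4].round(2), S_mm[:4].round(2))
```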
Recently, Schulz analyzed the properties of yet another wire model with long-range interactions, also behaving as $`e^2/ϵ|x|`$ at large $`x`$, resorting to a linearized dispersion of the kinetic energy and employing the bosonization technique, which gives exact results for a Luttinger liquid. He found persistent tails in the pair correlations, both for the number and magnetization variables, implying a divergent peak in the number structure factor $`S_{nn}(k)`$ at $`4k_F`$, and a pronounced but finite peak in the magnetization structure factor $`S_{mm}(k)`$ at $`2k_F`$. In his prediction, however, the real-space tails contain undetermined interaction-dependent prefactors. As we found that at $`N=22`$ our DMC and variational Monte Carlo (VMC) results for the structure compare fairly well with each other, we have employed VMC to study the $`N`$ dependence of the peaks of the structure factors, which we show in Fig. 3. In passing, we mention that our VMC results for the paramagnetic fluid almost coincide with those obtained from a harmonic treatment of a finite linear chain. It is evident that, at variance with the results for the Luttinger liquid, we have indications of peaks diverging with $`N`$ only at large values of the coupling, but for both $`S_{nn}(k)`$ and $`S_{mm}(k)`$. In addition, $`S_{mm}(2k_F)`$ appears to grow faster with $`N`$ than $`S_{nn}(4k_F)`$, again in contradiction with the results of Ref. . A possible explanation of these differences might be traced to either the undetermined interaction-dependent prefactors mentioned above or to the fact that in the present study the full dispersion of the kinetic energy was retained.

## 4. CONCLUSIONS AND ACKNOWLEDGEMENTS

We have presented accurate results for one-dimensional electron gases adapted to describe quantum wires of different width, focusing on the energy and the pair correlations. Our results for the energy, which are exact, do not involve surprises: they satisfy the Lieb-Mattis theorem, in contrast to approximate treatments, and rule out the occurrence of a Bloch instability. Thus, the origin of the anomalous plateau observed in the conductance of GaAs quantum wires in the limit of single-channel occupancy should be sought elsewhere. Our results for the pair correlations, on the other hand, are intriguing. They are not exact. Yet they should be rather accurate, and it is natural to compare them with the predictions for the Luttinger liquid studied by Schulz, which has a slightly different interparticle interaction but with the same long-range tail. However, as we have observed above, it does not appear possible to reconcile in a simple manner the predictions of the present investigation with those of Ref. . One possibility could be that the unknown interaction-dependent constants entering the tails of the pair correlations of the Luttinger liquid in fact have a singular dependence on the coupling. At this time, we can only say that this issue deserves further investigation, both with bosonization techniques, to fully determine the coupling dependence of the tails in the pair correlations, and with numerical simulations, to estimate structure factors in an exact fashion. To this end one might resort to the recently proposed reptation Monte Carlo, which provides a simple and direct way to evaluate ground-state averages of local operators exactly. One of us (GS) is happy to acknowledge useful discussions with Saverio Moroni and Allan H. MacDonald. We should also thank Stefania De Palo for reading the manuscript.
# Gamma-Ray Bursts: The Central Engine

## General GRB Model Requirements

Given the locations afforded by x-ray and optical afterglows, redshifts have now been determined for approximately 10 GRBs, so that we have at least a small sampling of GRB energies Galama99 . They are by no means standard candles. Even discounting the unusual case of GRB 980425 ($`8\times 10^{47}`$ erg), energies range from about $`5\times 10^{51}`$ erg (GRB 980613) to $`2\times 10^{54}`$ erg (GRB 990123). In addition, GRB time profiles and spectra are very diverse and separate into at least two classes - the “short-hard” bursts (with average duration 0.3 s) and the “long-soft” bursts (average duration 20 s). The first challenge any model builder must confront is deciding just which GRBs, and which features, he or she is attempting to model, since it is increasingly doubtful that all GRBs are to be explained in the same way. Moreover, in all of today’s models, the gamma-rays observed from a cosmological GRB are produced far from the site where the energy is initially liberated - presumably conveyed there by relativistic outflow or jets. How much of what we see in a GRB reflects the central engine and how much the environment where the outflow dissipates its energy? So our first step is to define the problem we are attempting to address. Even after the dramatic progress of the last two years, few definitive statements can be made about GRBs without provoking controversy. Still, in 1999, most people feel that the following are facets of a common GRB that the central engine must provide:

1) Highly relativistic outflow - $`\mathrm{\Gamma }\gtrsim 100`$, possibly highly collimated.

2) An event rate that, at the BATSE threshold and in the BATSE energy range, is about 1/day. Beaming, of course, raises this number appreciably.

3) Total energy in relativistic ejecta $`10^{53}`$ - $`10^{54}ϵ_\gamma ^{-1}f_\mathrm{\Omega }`$ erg, where $`ϵ_\gamma `$ is the efficiency for turning relativistic outflow into gamma-rays ($`\sim `$10%?), and $`f_\mathrm{\Omega }`$ is the fraction of the sky into which that part of the flow having sufficiently high $`\mathrm{\Gamma }`$ ($`\gtrsim `$100) is collimated ($`\sim `$1%?). For reasonable values of these parameters, the total energy required for a common GRB is 10<sup>52</sup> erg (see the sketch below).. Fainter GRBs can result from the same 10<sup>52</sup> erg event if the efficiency for producing relativistic matter is reduced (e.g., GRB 980425); brighter ones if the collimation is tighter.

4) A duration of relativistic flow in our direction no longer than the duration of the GRB. This constraint is highly restrictive for the short (0.3 s) bursts and may imply multiple models. For GRB models produced by internal shocks, the flow may additionally need to last as long as the GRB (modulo the relativistic time dilation). This makes a natural time scale of $`\sim `$10 s attractive.

5) In the case of long bursts, association with star-forming regions in galaxies and, in perhaps three cases, with supernovae of Type I.

The near coincidence of 10<sup>52</sup> erg with the energy released in the gravitational collapse of a stellar mass object to a neutron star (or, equivalently, the accretion disk of a black hole) has long suggested a link between GRBs and neutron star or black hole formation, a connection championed by Paczynski before cosmological models became fashionable.
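The bookkeeping behind requirement 3) is elementary and worth making explicit; the sketch below uses the fiducial efficiency and beaming fraction quoted in parentheses above.

```python
# Engine energy implied by an observed isotropic-equivalent gamma-ray
# energy, for the fiducial efficiency and beaming fraction quoted above.
eps_gamma = 0.1    # efficiency of turning relativistic outflow into gammas
f_Omega   = 0.01   # fraction of the sky covered by the high-Gamma flow
for E_iso in (1e53, 1e54):
    E_engine = E_iso * f_Omega / eps_gamma
    print(f"E_gamma,iso = {E_iso:.0e} erg -> engine energy ~ {E_engine:.0e} erg")
# -> 1e52 - 1e53 erg, the "common GRB" scale of the text
```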
Viable models separate into three categories (Table 1), where $`ϵ_{\mathrm{MHD}}`$ is the unknown efficiency for magnetohydrodynamical processes to convert either gravitational accretion energy at the last stable orbit ($`\sim 0.1\dot{M}c^2`$) or neutron star rotational energy into relativistic outflow. Those using black hole accretion Eberl99 ; Mac99 ; Ross99 ; Mes99 typically employ 1 - 10% for $`ϵ_{\mathrm{MHD}}`$; pulsar advocates Usov94 ; Wheel99 need approximately 100%. The collapsar model is incapable of producing relativistic jets of total duration less than a few seconds (hence short hard bursts are difficult - impossible unless the beam orientation wanders). Merging neutron stars and black holes, on the other hand, can produce short bursts if the disk viscosity is high (i.e., $`\alpha \sim 0.1`$), but cannot, with the same disk viscosity, produce long bursts. Merging neutron stars also lack the massive disks that help to focus the outflow in collapsars, and it may be more difficult for them to emit highly collimated jets. Hence their “equivalent isotropic energies” may be smaller (unless MHD collimation dominates). It seems more natural to associate the merging compact objects with short hard bursts, but this conjecture presently lacks any observational basis. Hopefully future observations, with e.g. HETE-2, will clarify whether short bursts are associated with host galaxies in the same way as the long ones. The pulsar-based models have not been studied nearly as extensively as either the collapsar or merging compact objects, perhaps because the MHD phenomena they rely on are difficult to simulate numerically. The magnetic fields and rotation rates invoked for the pulsar models, though large (P $`\sim `$ ms; B $`\sim `$ 10<sup>15</sup> gauss), are not much greater than those employed for the disk in MHD collapsar models. However, it is not at all clear how such models would make the large mass of <sup>56</sup>Ni inferred for SN 1998bw or the highly collimated flow required to explain energetic events like GRB 990123. Also, the bare pulsar version of the model Usov94 ignores the effects of neutrino-powered winds, and the supernova-based version Wheel99 ignores the collapse of the massive star that would continue, at least at some angles, during the few seconds it takes the pulsar to acquire its large field. A complete calculation of the implosion of the iron core of a massive star, including the coupled effects of rotation, magnetic fields, and neutrinos, has not been done, but could be in the next decade.

## Collapsars - Type 1

We thus consider here a model that can, in principle, satisfy the five constraints above, at least for long bursts, and has the added virtue of being calculable, with a few assumptions, on current computers. A collapsar is a massive star whose iron core has collapsed to a black hole that is continuing to accrete at a very high rate. The matter that it accretes, that is the helium and heavy elements outside the iron core, is further assumed to have sufficient angular momentum ($`j\sim 10^{16}`$ - $`10^{17}`$ cm<sup>2</sup> s<sup>-1</sup>) to form a centrifugally supported disk outside the last stable orbit. The black hole is either born with, or rapidly acquires, a large Kerr parameter. It may also be possible to create a situation quite similar to a collapsar in the merger of the helium core of a massive star with a black hole or neutron star Fry98 . What follows has been discussed in the literature Mac99 ; MWH99 ; Aloy99 .
The black hole accretes matter along its rotational axis until the polar density declines appreciably. Accretion is impeded in the equatorial plane by rotation. The accretion rate through the disk is insensitive to the disk viscosity because a steady state is rapidly set up in which matter falls into the hole at a rate balancing what is provided by stellar collapse at the outer boundary. The mass of the accretion disk is inversely proportional to the disk viscosity, and accretion rates of 0.01 - 0.1 M<sub>⊙</sub> s<sup>-1</sup> are typical during the first 20 s as the black hole grows from about 3 M<sub>⊙</sub> to about 4 or 5 M<sub>⊙</sub>. The accretion rate may be highly time variable down to intervals as short as 50 ms Mac99 , and an appreciable fraction of the matter passing through the disk is ejected as a powerful “wind” that itself carries up to a few $`\times 10^{51}`$ erg and a solar mass Mac99 ; Stone99 . Given the high temperature in the disk, this disk wind will, after some recombination, probably be mostly <sup>56</sup>Ni. This may be the origin of the light curve of SN 1998bw and other supernovae associated with GRBs. Disk accretion also provides an energy source for jets. In the simplest, but perhaps least efficient, version of the collapsar model, energy is transported from the very hot ($`\sim 5`$ MeV) inner disk to the rotational axis by neutrinos. Neutrinos arise from the capture of electron-positron pairs on nucleons in the disk and deposit a small fraction of their energy, $`\sim `$1%, along the axis, where the geometry is favorable for neutrino annihilation. The efficiency factor for neutrino energy transport is a sensitive function of the accretion rate, black hole mass and Kerr parameter, and the disk viscosity Pop99 . Only in cases where the accretion rate exceeds about 0.05 M<sub>⊙</sub> s<sup>-1</sup>, for black hole masses 3 - 5 M<sub>⊙</sub> and disk viscosities $`\alpha \sim `$ 0.1, will neutrino transport be significant. Using the actual accretion rate, Kerr parameter, and hole mass as a function of time, and $`\alpha \sim 0.1`$, MacFadyen finds for a helium core of 14 M<sub>⊙</sub> a total energy available for jet formation of up to $`\sim `$10<sup>52</sup> erg. The typical time scale for the duration of the jet, and a lower bound for the duration of the GRB, is $`\sim `$10 s, the dynamical time scale of the helium core. In addition to any neutrino energy transport, one has the possibility of magnetohydrodynamical processes, which could, in principle, efficiently convert a large fraction of the binding energy at the last stable orbit, up to 42% of $`\dot{M}c^2`$, into jet energy. Adopting a more conservative value, $`ϵ_{\mathrm{MHD}}\sim `$ 1% MWH99 , one still obtains 10<sup>52</sup> - 10<sup>53</sup> erg available for jet formation. Dumping this much energy into the natural funnel-shaped channel that develops when a rotating star collapses gives rise to a hydrodynamically collimated jet focused into $`\sim `$1% of the sky Mac99 ; MWH99 ; Aloy99 . Magnetic collimation, though uncertain, could, in principle, increase the collimation factor still further. Thus jets of equivalent isotropic energy 10<sup>54</sup>, and possibly 10<sup>55</sup> erg (if, e.g., $`ϵ_{\mathrm{MHD}}\sim 0.1`$), seem feasible in this model. The event rate of collapsars is also adequate FWH99 . The collapsar model also makes several “predictions”, some of which have already been confirmed (these same predictions were inherent in the original 1993 model Woo93 ). First, the GRB should originate from massive stars, in fact the most massive stars, and be associated with star forming regions.
In fact, given the need for a large helium core mass, collapsars may be favored not only by rapid star formation, but also by low metallicity. This reduces the loss of both mass and angular momentum. Pre-explosive mass loss also provides a natural explanation for the surrounding medium needed to make the GRB afterglows and makes the prediction that the density declines as r<sup>-2</sup>. The GRB duration, $`\sim `$ 10 s, corresponds to the collapse time scale of the helium core. The explosion is expected to be highly collimated, though just how collimated was not realized until 1998 Mac99 . The jet blows up the star in which it is made, so one expects some kind of supernova. Since the presence of a massive hydrogen envelope prohibits making a strong GRB, the supernova must be of Type I (a possible exception would be an extreme Type IIb supernova, one that had lost all but a trace of hydrogen on its surface). That the explosion might also produce a lot of <sup>56</sup>Ni from a disk-powered wind was not appreciated until Mac99 . Without the <sup>56</sup>Ni, the supernova would have been very dim, which is why I originally referred to the collapsar model as a “failed supernova”. It also seems natural that both the variable accretion rate Mac99 and the hydrodynamical interaction of the jet with the star which it penetrates may introduce temporal structure into the burst. Implications for GRB diversity are discussed in $`\mathrm{\S }`$4.

## Collapsars - Type 2

It is also possible to produce a collapsar in a delayed fashion by fallback in an otherwise successful supernova MWH99 . A spherically symmetric explosion is launched in the usual way by neutron star formation and neutrino energy transport, but the supernova shock has inadequate strength to explode the whole star. Over a period of minutes to hours a variable amount of mass, $`\sim `$ 0.1 to 5 M<sub>⊙</sub>, falls back into the collapsed remnant, often turning it into a black hole WW95 and establishing an accretion disk. The accretion rate, $`\sim `$ 0.001 to 0.01 M<sub>⊙</sub> s<sup>-1</sup>, is inadequate to produce a jet mediated by neutrino annihilation Pop99 , but MHD processes may still function with the same efficiency as in the Type 1 collapsar (or merging neutron stars, for that matter). Then the total energy depends not on the accretion rate, but on the total mass that reimplodes. For 1 M<sub>⊙</sub> and $`ϵ_{\mathrm{MHD}}`$ = 1%, this is still 10<sup>52</sup> erg. A key difference is the time scale, now typically 10 - 100 times longer. Thus the most likely outcome of a Type 2 collapsar in a star that has lost its hydrogen envelope is a less luminous, but longer lasting GRB. Indeed, there exist GRBs that have lasted hundreds of seconds, and there may be a class of longer, fainter GRBs awaiting detection. Since black holes may be more frequently produced by fallback than by failure of the central engine Fry99 , these sorts of events might even be more common than ordinary GRBs. Both kinds of collapsars can also occur in stars that have not lost their envelopes. Stars with lower metallicity have less radiative mass loss, so that solitary stars (or widely detached binaries) might also end their lives with both a rapidly rotating massive helium core and a hydrogen envelope. Because the motion of the jet head through the star is sub-relativistic Aloy99 and because fallback only maintains a high accretion rate for 100 - 1000 s, highly relativistic jets will not escape red supergiants with radii $`\gtrsim `$ 10<sup>13</sup> cm. What happens in more compact blue supergiants is less certain.
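The escape argument above is a timing comparison between the jet-head crossing time and the 100 - 1000 s over which fallback powers the jet; a minimal sketch, taking the $`\sim `$10<sup>10</sup> cm s<sup>-1</sup> head speed quoted below and representative stellar radii (assumptions, apart from the SN 1987A-like value), follows.

```python
# Jet-head crossing times versus the 100-1000 s of fallback-powered
# accretion; radii are representative assumptions except the 87A-like one.
v_head = 1e10   # cm/s, sub-relativistic jet-head speed
for name, radius in [("helium core", 1e11),
                     ("blue supergiant (87A-like)", 3e12),
                     ("red supergiant", 1e13)]:
    t_cross = radius / v_head
    escapes = "yes" if t_cross < 1000.0 else "marginal/no"
    print(f"{name:27s} t_cross ~ {t_cross:6.0f} s   escape: {escapes}")
```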
Generally speaking, the largest fallback masses will characterize the weakest supernova explosions and also have the shortest fallback time scales. With a jet head speed of 10<sup>10</sup> cm s<sup>-1</sup>, it would have taken 300 s, for example, to cross the blue progenitor of SN 1987A. The fallback mass in 87A is believed to have been $`\lesssim `$ 0.1 M<sub>⊙</sub>, probably inadequate to turn the neutron star into a black hole and certainly too little to make a powerful GRB, but perhaps enough to make a jet anyway - or at least cause some mixing. Larger mass helium cores (87A was 6 M<sub>⊙</sub>) might have more fallback though, definitely making black holes and more energetic jets. Whether the jet can still have a large Lorentz factor remains to be calculated. Even if they do not make GRBs, collapsar-powered jets in blue and red supergiant stars may still lead to very energetic, asymmetric supernova explosions, possibly accompanied by large <sup>56</sup>Ni production and luminous soft x-ray transients due to shock breakout MWH99 . These transients may have luminosities up to $`10^{49}`$ erg s<sup>-1</sup> times the fraction of the sky to which high energy material is ejected (typically 0.01) and color temperatures of $`2\times 10^6`$ K.

## GRB Diversity

As previously noted, the inferred total energy in gamma-rays for those GRBs whose distances have been determined is quite diverse. One appealing aspect of the collapsar model is that its outcome is sufficiently variable to explain this diversity. The observed burst intensity is sensitive not only to the jet’s total energy, but also to the fraction of that energy in the observer’s direction that has Lorentz factor $`\mathrm{\Gamma }`$ above some critical value ($`\sim `$100). Most collapsars accrete about the same mass, 1 - 3 M<sub>⊙</sub>, before accretion is truncated by the explosion of the star. For an efficiency factor of 1%, this implies a total jet (and disk wind) energy of $`\sim `$ a few $`\times 10^{52}`$ erg. However, depending on the initial collimation of the jet, its internal energy (or equivalently the ratio of its pressure to its kinetic energy flux), and its duration, very different outcomes can result. A poorly collimated jet, or one that loses its energy source before breaking through the surface of the star, may only eject a little mildly relativistic matter and make, e.g., GRB 980425. A focused, low-entropy jet that lasts $`\sim `$10 s after it has broken free of its stellar cocoon might make GRB 990123. Duration can be affected by such things as the presupernova mass and angular momentum distribution. Internal energy depends on details of the jet acceleration. Neutrino-powered jets, for example, have much higher internal energies than some MHD jets and may be harder to focus. Hydrodynamical focusing of the jet also depends on the density distribution in the inner disk, which in turn depends on disk viscosity and accretion rate. And of course the efficiency factor need not always be 1%, e.g., for neutrino-powered models and MHD models. Calculations MWH99 ; Aloy99 illustrate this. Fig. 1 shows the “equivalent isotropic kinetic energy” as a function of polar angle for three models having the same total jet energy, $`3\times 10^{51}`$ erg, at the base. All models except the dot-dash line for J22 are shown 400 s after the initiation of the jet, well after it has broken out of the helium core. The three models differ only in the ratio of internal energy to kinetic energy given to the jet at its base.
Yet, even for a constant viewing angle, $`\theta =0`$, the inferred isotropic energies vary by an order of magnitude. Larger variations are possible if one goes to other values of the viewing angle - not because the GRB is being viewed “from the side”, but because the material coming at the observer has both less energy and a lower Lorentz factor. Thus it is also possible that GRB 980425 was a more typical GRB viewed off axis Nak99 , but not in the sense of a single highly relativistic beam which emitted a few photons in our direction. Instead, we saw emission from matter coming towards us with a lower $`\mathrm{\Gamma }`$. Fig. 2 is not the result of any current calculation, but just a sketch to illustrate what calculations may ultimately show. (See, for comparison, Fig. 4 of Aloy99 , a first pass at one collapsar model using a code with the necessary relativistic hydrodynamics. Unfortunately this calculation has not yet been run long enough to show the final distribution of Lorentz factors.) There is a large concentration of mass, $`\sim `$10 M<sub>⊙</sub>, moving at sub-relativistic speeds. This is the supernova produced by the jet passing through the star. Though the speed is “slow”, most of the energy may be concentrated here if the jet did not last long enough or stay focused enough to become highly relativistic (dotted line). Then there is a relativistic “tail” to the ejecta. Even though it is a small fraction of the mass, this tail could, in some cases, namely the common GRBs, contain most of the energy in the explosion. Table 2 indicates some of the diverse outcomes that might arise. Here R<sub>15</sub> is an approximate radius in units of 10<sup>15</sup> cm where the material might give up its energy. A typical Wolf-Rayet mass loss rate has been assumed for those cases where external shocks are clearly important (x-ray afterglows and GRB 980425). Supernovae also typically have a photospheric radius of 10<sup>15</sup> cm. $`\mathrm{\Omega }/4\pi `$ is the fraction of the sky into which the mass is beamed. The fractions sum to over 100% because the supernova is not beamed.

## Time Variability, Lag Time, and Luminosity

At this meeting we also heard of two fascinating results with important implications for the use of GRBs as calibrated “standard candles” for cosmology. Ramirez-Ruiz and Fenimore (Paper T-04) discussed a correlation between “variability” and luminosity. The more rapidly variable the light curve, the higher the absolute luminosity. Norris, Marani, & Bonnell Norris99 also showed data to support a high degree of (anti-)correlation between absolute luminosity and the “time lag”, the delay time between the arrival of hard and soft sub-pulses. The shorter the lag, the brighter the burst. Both these effects may be understood as an outcome of Fig. 1. The bursts for which we infer the highest luminosities are those that are observed straight down the axis of the jet, $`\theta =0`$. This is also the angle at which we see the largest Lorentz factors. Slightly away from $`\theta =0`$, both the equivalent isotropic energy and $`\mathrm{\Gamma }`$ drop precipitously. For larger Lorentz factors, the burst will be produced closer to the source. Ref. Pan97 gives a thinning radius, where the GRB becomes optically thin to Thomson scattering, that is proportional to $`\mathrm{\Gamma }^{1/2}`$. The distance where internal shocks form from two shells having Lorentz factors $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ is $`\mathrm{\Gamma }_1\mathrm{\Gamma }_2c\mathrm{\Delta }t`$.
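The last estimate is a one-liner; the sketch below evaluates $`\mathrm{\Gamma }_1\mathrm{\Gamma }_2c\mathrm{\Delta }t`$ for illustrative Lorentz-factor pairs and a shell-ejection interval (all values assumed, for orientation only).

```python
# Internal-shock formation radius r ~ Gamma_1 Gamma_2 c dt for two shells
# ejected a time dt apart; all parameter values below are illustrative.
c = 2.998e10            # cm/s
dt = 0.05               # s, interval between shell ejections
for g1, g2 in [(100, 200), (300, 600)]:
    r_shock = g1 * g2 * c * dt
    print(f"Gamma1={g1:3d}, Gamma2={g2:3d}:  r_shock ~ {r_shock:.1e} cm")
```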
For smaller radii and larger $`\mathrm{\Gamma }`$, time scales will thus be contracted. That is, larger $`\mathrm{\Gamma }`$ may imply more time structure on shorter scales, and perhaps reduced lag times as well. Then variability and time lags would be related to the equivalent isotropic energy, because both are functions of the viewing angle. GRB 980425 is an exception, since its GRB was produced by an external shock interaction between mildly relativistic matter and the presupernova mass loss.
# Variational procedure and generalized Lanczos recursion for small-amplitude classical oscillations

## Abstract

A variational procedure is developed that yields the lowest frequencies of small-amplitude oscillations of classical Hamiltonian systems. The genuine Lanczos recursion is generalized to treat the related non-Hermitian eigenvalue problems.

Normal modes $`\xi `$ and frequencies $`\omega `$ of small oscillations of a classical system near equilibrium are determined by the secular equation $$\omega ^2M\xi =K\xi ,$$ (1) where $`M`$ and $`K`$ are $`N\times N`$ symmetric positive definite matrices of mass coefficients and spring constants, respectively. In many applications the number $`N`$ of degrees of freedom is large, while only a few lowest frequencies are of interest . Equation (1) represents a problem more complex than a regular symmetric eigenvalue problem, unless $`M`$ or $`K`$ is diagonal. Equation (1) can be transformed into Hamiltonian form by introducing the canonical momentum $`\eta =\omega M\xi `$: $$K\xi =\omega \eta ,T\eta =\omega \xi ,$$ (2) where $`T=M^{-1}`$. Thus, the frequencies of the normal modes are the eigenvalues of the $`2N\times 2N`$ matrix $$\left(\begin{array}{cc}0& T\\ K& 0\end{array}\right)$$ (3) The spectrum of this matrix consists of pairs $`\pm \omega `$, since $`(\xi ,-\eta )`$ is also a solution of (2), corresponding to $`-\omega `$. The lowest frequency $`\omega _{\mathrm{min}}`$ is the lowest positive eigenvalue of the matrix (3). Although the eigenvalues of the matrix (3) are always real, the matrix itself is non-Hermitian, unless $`K=T`$. Therefore, its diagonalization poses a formidable task. The major problem is that no general minimum principle exists that yields eigenvalues of arbitrary diagonalizable non-Hermitian matrices. This does not allow one to formulate a variational procedure similar to the Rayleigh-Ritz procedure for Hermitian matrices. If $`K=T`$, the matrix (3) is Hermitian, and its positive eigenvalues coincide with those of $`K`$ and $`T`$. As is known from quantum mechanics, the lowest eigenvalue $`ϵ_{\mathrm{min}}`$ of a Hermitian matrix $`H`$ can be obtained from the minimum principle $$ϵ_{\mathrm{min}}=\underset{\{\psi \}}{\mathrm{min}}\frac{(\psi H\psi )}{(\psi \psi )}.$$ (4) The minimum is to be searched over all vectors $`\psi `$. The Ritz variational procedure is an approximation in which the set $`\{\psi \}`$ in (4) is restricted to some subspace $`𝒦`$ of dimension $`n<N`$. The best approximation to $`ϵ_{\mathrm{min}}`$ in the sense of (4) is obtained as the lowest eigenvalue of the $`n\times n`$ Rayleigh matrix $`\stackrel{~}{H}`$, obtained by projection of $`H`$ onto $`𝒦`$. The special paired structure of the matrix (3) makes it possible to generalize (4) so as to yield $`\omega _{\mathrm{min}}`$. In fact, $$\omega _{\mathrm{min}}=\underset{\{\xi ,\eta \}}{\mathrm{min}}\frac{(\xi K\xi )+(\eta T\eta )}{2\left|(\xi \eta )\right|}.$$ (5) The minimum is to be searched over all possible phase space configurations $`\{\xi ,\eta \}`$. Before providing the proof of this equation, let me point out some of its features. First, it states that $`\omega _{\mathrm{min}}`$ is the minimum harmonic part of the total energy, $`(\xi K\xi )/2+(\eta T\eta )/2`$, over the phase space configurations normalized by $`(\xi \eta )=1`$. Since $`K`$ and $`T`$ are both positive definite, the right-hand side is strictly positive and so is $`\omega _{\mathrm{min}}`$. Second, equation (5) is symmetric in $`K`$ and $`T`$, according to the nature of the problem.
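The pairing $`\pm \omega `$ and the equivalence between the $`2N\times 2N`$ matrix (3) and the generalized symmetric problem (1) are easy to check numerically; below is a minimal sketch with dense random positive-definite test matrices (the sizes and the construction of $`M`$ and $`K`$ are illustrative assumptions, not the sparse test matrices used later in the text).

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N = 60

def spd(n):
    """Random symmetric positive-definite matrix (diagonally shifted)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

M, K = spd(N), spd(N)
T = np.linalg.inv(M)

# 2N x 2N non-Hermitian matrix of eq. (3): eigenvalues come in pairs +/- omega
H = np.block([[np.zeros((N, N)), T], [K, np.zeros((N, N))]])
w_block = np.sort(np.linalg.eigvals(H).real)

# generalized symmetric problem (1): omega^2 M xi = K xi
w_gen = np.sqrt(eigh(K, M, eigvals_only=True))

print(np.allclose(w_block[N:], w_gen), np.allclose(w_block[:N], -w_gen[::-1]))
```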
When $`K=T`$ the minimum is achieved at $`\xi =\eta `$, and (5) becomes the same as (4). Note that the functional in (5) has no maximum, since the denominator can be made arbitrarily small. The global minimum, however, always exists. This is not obvious, since the set of all pairs of vectors with $`(\xi \eta )=1`$ is not compact. Indeed, say, any vector orthogonal to $`\eta `$ can be added to $`\xi `$, making $`|\xi |`$ arbitrarily large. However, the functional in (5) grows indefinitely in this case, so that the global minimum is achieved at finite $`|\xi |`$ and $`|\eta |`$. Variation of (5) with respect to $`\xi `$ and $`\eta `$ yields equations (2). Thus, the solutions of (2) are the stationary points of (5). The global minimum of (5), therefore, indeed gives $`\omega _{\mathrm{min}}`$. The singularity in the denominator poses no problem, since it corresponds to infinitely large values of the functional, while near the minimum the functional is analytic. Minimum principle (5) can, in fact, be obtained from the Thouless minimum principle , derived for non-Hermitian matrices that appear in the random phase approximation (RPA). Equation (5) transforms into the Thouless minimum principle under the substitution $`A=(K+T)/2`$, $`B=(K-T)/2`$, $`x=(\xi +\eta )/2`$, and $`y=(\xi -\eta )/2`$. A variational procedure similar to the Rayleigh-Ritz procedure can be formulated if the coordinates $`\xi `$ and momenta $`\eta `$ in (5) are restricted to subspaces $`𝒰`$ and $`𝒱`$ of dimension $`n`$, respectively. Let $`\{\xi _i\}`$ and $`\{\eta _i\}`$ be two sets of vectors that span $`𝒰`$ and $`𝒱`$, such that $`(\xi _i\eta _j)=\delta _{ij}`$. Expanding $`\xi =u_i\xi _i`$, $`\eta =v_i\eta _i`$ and varying (5) with respect to $`u_i`$ and $`v_i`$, we find the latter to obey a $`2n\times 2n`$ eigenvalue equation $$\left(\begin{array}{cc}0& \stackrel{~}{T}\\ \stackrel{~}{K}& 0\end{array}\right)\left(\begin{array}{c}u\\ v\end{array}\right)=\stackrel{~}{\omega }\left(\begin{array}{c}u\\ v\end{array}\right),$$ (6) with $`\stackrel{~}{K}_{ij}=(\xi _iK\xi _j)`$ and $`\stackrel{~}{T}_{ij}=(\eta _iT\eta _j)`$. Equation (6) generalizes the Hermitian Rayleigh-Ritz eigenvalue equation for $`\stackrel{~}{H}`$. It has $`2n`$ solutions $`\pm \stackrel{~}{\omega }`$, the lowest positive of which gives the best approximation to $`\omega _{\mathrm{min}}`$ in the sense of equation (5). A Krylov subspace for the matrix (3) can be constructed by acting with it repeatedly on an arbitrary vector $`(\xi _1,\eta _1)`$: $$\left(\begin{array}{c}\xi _1\\ \eta _1\end{array}\right),\left(\begin{array}{c}T\eta _1\\ K\xi _1\end{array}\right),\left(\begin{array}{c}TK\xi _1\\ KT\eta _1\end{array}\right),\mathrm{}$$ (7) The subspace spanned by the first $`n`$ vectors of this sequence has the property of approximating an invariant subspace of (3). Thus, it is natural to expand an approximation to an eigenvector of (3) as a linear combination of these vectors. In other words, the natural choice for the subspaces $`𝒰`$ and $`𝒱`$ in the variational procedure described above are the subspaces $`𝒰_n`$ and $`𝒱_n`$ spanned by the upper and lower components of the first $`n`$ vectors of (7). In order to implement the variational procedure, it is necessary to construct a biorthogonal basis $`\{\xi _i,\eta _i\}`$, $`i=1,\mathrm{},n`$ in $`𝒰_n`$ and $`𝒱_n`$ and to compute the matrix elements of $`\stackrel{~}{K}`$ and $`\stackrel{~}{T}`$.
Both tasks can be performed simultaneously using the following recursion: $`\xi _{i+1}`$ $`=`$ $`\beta _{i+1}^{-1}(T\eta _i-\alpha _i\xi _i-\beta _i\xi _{i-1})`$ (9) $`\eta _{i+1}`$ $`=`$ $`\delta _{i+1}^{-1}(K\xi _i-\gamma _i\eta _i-\delta _i\eta _{i-1}).`$ (10) The four coefficients $`\alpha _i`$, $`\beta _i`$, $`\gamma _i`$, and $`\delta _i`$ are to be chosen at each step $`i`$ so as to make $`\xi _{i+1}`$ orthogonal to $`\eta _i`$ and $`\eta _{i-1}`$, and $`\eta _{i+1}`$ orthogonal to $`\xi _i`$ and $`\xi _{i-1}`$. This appears to be enough to ensure global biorthogonality $`(\xi _i\eta _j)=\delta _{ij}`$. Indeed, assume biorthogonality to hold up to step $`i`$. Multiplying (9) by $`\eta _j`$, $`j<i-1`$, we have $`(\eta _j\xi _{i+1})\propto (\eta _jT\eta _i)=(\eta _iT\eta _j)=0`$ due to the Hermiticity of $`T`$ and the fact that $`T\eta _j`$ is a linear combination of all $`\xi _k`$ with $`k\le j+1<i`$. Thus, the biorthogonality also holds at step $`i+1`$. Multiplying (9) by $`\eta _{i-1}`$, $`\eta _i`$, and $`\eta _{i+1}`$ and using biorthogonality, we get $`(\xi _i\eta _i)=1`$, $`\stackrel{~}{T}_{ii}=\alpha _i`$, and $`\stackrel{~}{T}_{i,i-1}=\stackrel{~}{T}_{i-1,i}=\beta _i`$. Similarly, multiplying (10) by $`\xi _{i-1}`$, $`\xi _i`$, and $`\xi _{i+1}`$ gives $`\stackrel{~}{K}_{ii}=\gamma _i`$ and $`\stackrel{~}{K}_{i,i-1}=\stackrel{~}{K}_{i-1,i}=\delta _i`$. All other matrix elements of $`\stackrel{~}{K}`$ and $`\stackrel{~}{T}`$ vanish. The recursion (9)–(10) is a straightforward generalization of the Hermitian Lanczos recursion $$\psi _{i+1}=\beta _{i+1}^{-1}(H\psi _i-\alpha _i\psi _i-\beta _i\psi _{i-1})$$ (11) applicable to any Hermitian matrix $`H`$. When $`K=T`$ and $`\xi _1=\eta _1`$, both equations (9) and (10) coincide with each other and with equation (11), up to the notation. As in the case of the Hermitian Lanczos algorithm, several of the lowest frequencies can be found one by one by projecting the $`\xi `$\- and $`\eta `$-components of converged eigenvectors out of the $`𝒱_n`$ and $`𝒰_n`$ subspaces, respectively. The method was tested on a set of large sparse random matrices of the form (3). The symmetric matrices $`T`$ and $`K`$ were generated to have an average of 40 randomly distributed and randomly positioned matrix elements in each row. Both $`K`$ and $`T`$ were shifted by an appropriate constant to ensure positive definiteness. Figure 1 demonstrates the convergence results for a matrix of size $`2N=200000`$. For smaller matrices, up to $`2N=2000`$, where it was possible to obtain all eigenvalues with regular methods, the present method converged to the true lowest frequency in all instances. In conclusion, a method is proposed that generalizes the Rayleigh-Ritz variational procedure and the Lanczos recursion to the case of non-Hermitian matrices of the form (3), which determine normal modes and frequencies of small-amplitude oscillations of Hamiltonian systems. Equations (2) have numerous applications beyond purely mechanical problems. The Schrödinger equation in a non-orthogonal basis represents a generalized symmetric eigenvalue problem similar to (1). RPA and other time-dependent techniques in nuclear physics and quantum chemistry lead to equations similar to (2) . Finally, eigenvectors of so-called Hamiltonian matrices, of which (3) is a special case, solve the nonlinear algebraic Riccati equation, which appears in the theory of stability and optimal control .
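A compact dense-matrix sketch of the recursion (9)-(10) is given below. The symmetric split $`\beta _{i+1}=\sqrt{|(r,s)|}`$, $`\delta _{i+1}=(r,s)/\beta _{i+1}`$ of the free normalization is an arbitrary choice (any split with $`\beta _{i+1}\delta _{i+1}=(r,s)`$ preserves biorthonormality), and the sketch omits the re-biorthogonalization that a production code would need against round-off.

```python
import numpy as np

def paired_lanczos(K, T, n, rng=np.random.default_rng(1)):
    """Recursion (9)-(10): returns the n x n tridiagonal projections
    Kt, Tt of K and T onto the biorthogonal Krylov bases."""
    N = K.shape[0]
    xi, eta = rng.standard_normal(N), rng.standard_normal(N)
    eta = eta / (eta @ xi)                        # (xi_1, eta_1) = 1
    xi_p, eta_p = np.zeros(N), np.zeros(N)
    beta = delta = 0.0
    al, be, ga, de = [], [], [], []
    for i in range(n):
        alpha = eta @ T @ eta                     # kills (eta_i, xi_{i+1})
        gamma = xi @ K @ xi                       # kills (xi_i, eta_{i+1})
        al.append(alpha); ga.append(gamma)
        if i == n - 1:
            break
        r = T @ eta - alpha * xi - beta * xi_p    # unnormalized xi_{i+1}
        s = K @ xi - gamma * eta - delta * eta_p  # unnormalized eta_{i+1}
        prod = r @ s                              # need beta*delta = prod
        beta = np.sqrt(abs(prod)); delta = prod / beta
        be.append(beta); de.append(delta)
        xi_p, eta_p = xi, eta
        xi, eta = r / beta, s / delta
    Tt = np.diag(al) + np.diag(be, 1) + np.diag(be, -1)
    Kt = np.diag(ga) + np.diag(de, 1) + np.diag(de, -1)
    return Kt, Tt

def omega_min(Kt, Tt):
    """Lowest positive eigenvalue of the projected 2n x 2n problem (6)."""
    n = Kt.shape[0]
    H = np.block([[np.zeros((n, n)), Tt], [Kt, np.zeros((n, n))]])
    w = np.linalg.eigvals(H).real
    return w[w > 1e-12].min()
```

With the matrices $`M`$, $`K`$ of the previous snippet, `omega_min(*paired_lanczos(K, T, 40))` approaches the exact lowest frequency as the subspace size $`n`$ grows.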
I would like to acknowledge numerous enlightening discussions with Vladimir Chernyak during my appointment at the University of Rochester.
# Nodal Quasiparticles versus Phase Fluctuations in High Tc Superconductors: An Intermediate Scenario

## 1 Introduction

Within the pseudogap regime of the cuprates, it has been widely argued that the excitations of the superconducting state are predominantly fermionic or predominantly bosonic in character. We have pointed out that there is a third scenario which is no less likely and which, at the least, needs to be considered on an equal footing. This third scenario emerges when one studies the BCS Bose-Einstein condensation (BEC) crossover picture for $`0\le T\le T_c`$. At weak coupling (when the coherence length $`\xi `$ is large) the excitations are dominantly fermionic, and at very strong coupling (small $`\xi `$) they are dominantly bosonic. At intermediate coupling, which is likely to be appropriate to the cuprates, the excitations are of mixed character. In this paper we discuss the implications of this third scenario, in the context of semi-quantitative comparisons with experiment, for the superfluid density and specific heat. There have been a number of recent papers which have presented similar comparisons within the context of the “fermionic” or nodal quasi-particle picture, extended, however, to a Fermi-liquid-based interpretation. This differs conceptually from the original formulation of Lee and Wen because the important Mott insulator constraint ($`\omega _p^2\propto x`$) is enforced via a hole concentration ($`x`$) dependent Landau parameter $`F_1^s`$, which then constrains the penetration depth $`\lambda (x)`$ via $`d\lambda ^{-2}/dT\propto x^2`$. These $`x`$ dependences are considerably different from those of the spin-charge separation approach, which is based on the presumption that $`d\lambda ^{-2}/dT`$ is $`x`$ independent. An underlying philosophy of the present paper is that fundamental features of the pseudogap phase should be accommodated at the outset, before including Landau parameter effects. Our viewpoint is different from the phenomenology of Lee and Wen because here $`T_c`$ is associated with both nodal quasi-particles and bosonic pair excitations. Stated alternatively, in the BCS-BEC crossover picture there is an important distinction between the order parameter $`\mathrm{\Delta }_{sc}`$ and the excitation gap $`\mathrm{\Delta }`$ at all temperatures. This approach leads to a new mean-field-like theory which incorporates (i) the usual BCS equation for $`\mathrm{\Delta }(T)`$ and (ii) an equation for the chemical potential $`\mu (T)`$, along with (iii) a third new equation for $`\mathrm{\Delta }^2-\mathrm{\Delta }_{sc}^2`$, which is related to the number of thermally excited pair (bosonic) excitations. The first two equations enforce an underlying fermionic constraint, so that the bosons of the strong coupling limit are different from those of a true boson system, such as He<sup>4</sup>. To include the Mott insulator constraint, the Fermi velocity $`v_F`$ must then be $`x`$-dependent, as shown in Figure 1a (along with experimental data), in a way which directly reflects the $`x`$-dependence of $`\lambda (T=0)=\lambda _o`$, shown in the inset to Figure 1c. Here the parameters were chosen to give a reasonable fit to the measured phase diagram for the YBaCuO system.

## 2 Penetration Depth and Specific Heat

In order to calculate the penetration depth and specific heat within the present approach, the fermionic contributions must be quantified, just as in the nodal quasi-particle picture. This contribution is accompanied by an additive bosonic component.
So as not to complicate the logic, we compute the former by taking the second velocity contribution $`v_2`$ as given by a perfect $`d`$-wave model; thus the $`x`$ dependence of $`v_2`$ entirely reflects that of $`\mathrm{\Delta }(x)`$, as shown in Fig. 1b, which is in slight disagreement with the data indicated there. The resulting values for the inverse squared penetration depth are plotted in Fig. 1c, along with a collection of experimental data. The predicted increase at large $`x`$ is a reflection of the behavior of $`\mathrm{\Delta }(x)`$. Given the spread in the data for all quantities indicated in Fig. 1, it would appear that there are no obvious inconsistencies. In Fig. 2 we show the $`x`$-dependent coefficient of the quadratic term (2a) in the specific heat $`C_v=\gamma ^{*}T+\alpha T^2`$, along with that of the linear term (2b). The first of these reflects the fermionic quasi-particle contribution (which depends on $`v_2`$ and $`v_F`$), and the second derives purely from the bosonic contribution. Also indicated are a collection of experimental data on three different cuprates. We know of no other intrinsic origin for this $`\gamma ^{*}`$ term, which, despite its widespread presence, is usually attributed to extrinsic effects. The upturn at large $`x`$ in the $`\gamma ^{*}`$ data is of no concern, since it is a reflection of the normal state behavior (shown more completely in the inset). For overdoped samples, as $`T_c\to 0`$, extrinsic effects, e.g., paramagnetic impurities, make it difficult to observe $`\alpha `$ and $`\gamma ^{*}`$ in the intrinsic superconducting state.
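Extracting the two coefficients from low-temperature data amounts to a linear fit of $`C_v/T`$ against $`T`$; a minimal sketch on synthetic numbers (all values illustrative, not taken from Fig. 2) follows.

```python
import numpy as np

# Synthetic low-T specific heat; all numbers illustrative.  Dividing by T
# turns C_v = gamma* T + alpha T^2 into a straight line in T.
rng = np.random.default_rng(3)
T = np.linspace(1.0, 10.0, 40)                    # K
gamma_true, alpha_true = 2.0, 0.15                # mJ/(mol K^2), mJ/(mol K^3)
Cv = gamma_true * T + alpha_true * T**2 + 0.05 * rng.standard_normal(T.size)

alpha_fit, gamma_fit = np.polyfit(T, Cv / T, 1)   # slope, intercept
print(f"gamma* = {gamma_fit:.2f}, alpha = {alpha_fit:.3f}")
```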
# ELECTRIC CHARGE, EARLY UNIVERSE AND THE SUPERSTRING THEORIES

AFSAR ABBAS

Institute of Physics, Bhubaneswar-751005, India

(e-mail: afsar@iopb.res.in)

## Abstract

Very recently, it has been shown by the author that the Standard Model Higgs cannot be a physical particle. Here, on most general grounds, it is established that as per the Standard Model there is no electric charge above the electro-weak phase transition temperature. Hence there was no electric charge present in the early universe. The Superstring Theories are flawed in as much as they are incompatible with this requirement. Hence the Superstring Theories are inconsistent with this basic structure and requirement of the Standard Model.

Contrary to earlier expectations, it has recently been demonstrated that electric charge is quantized in the Standard Model (SM) \[ 1,2 \]. This quantization requires the complete machinery of the SM, including the Higgs. If one or more conditions are not properly taken care of, it may lead to problems \[ 2 \]. For example, analyses of the property of electric charge quantization in the SM have led some authors to conclude that millicharged particles are permitted by the SM. This is erroneous, as was pointed out recently by the author \[ 3 \]. Besides other built-in features of the SM that go into the demonstration of electric charge quantization, one crucial feature was spontaneous symmetry breaking through a Higgs doublet \[ 1,2 \]. Using this property, it was shown that when the electro-weak symmetry is restored there is no electric charge \[ 4 \]. Hence there was no electric charge in the early universe. In this demonstration one needs a Higgs doublet, as is present in the SM. Very recently \[ 5 \], the author has shown that the SM Higgs is not a physical particle. It is a manifestation of the ‘vacuum’ which gives the basic structure to the SM. In this demonstration \[ 5 \] the Higgs was not a doublet but had unconstrained isospin and hypercharge representations (see \[ 5 \] for details). Here we ask the question: with this generalized picture, what happens to the electric charge when the symmetry is restored? How does this result reflect upon the well-studied and ever-promising Superstring Theories? It has been shown by the author \[ 5 \] that in the SM $`SU(N_C)\times SU(2)_L\times U(1)_Y`$, for $`N_C=3`$, spontaneous symmetry breaking by a Higgs of weak hypercharge $`Y_\varphi `$ and general isospin T, whose $`T_3^\varphi `$ component develops the vacuum expectation value $`<\varphi >_0`$, fixes ‘h’ in the electric charge definition $`Q=T_3+hY`$ to give $$Q=T_3-\frac{T_3^\varphi }{Y_\varphi }Y$$ (1) where Y is the hypercharge for doublets and singlets of a single generation. For each generation, renormalizability through triangular anomaly cancellation and the requirement of the identity of L- and R-handed charges in $`U(1)_{em}`$ imply that all unknown hypercharges are proportional to $`\frac{Y_\varphi }{T_3^\varphi }`$. Hence the correct charges (for $`N_C=3`$) follow as below: $`Q(u)={\displaystyle \frac{1}{2}}(1+{\displaystyle \frac{1}{N_C}})`$ $`Q(d)=-{\displaystyle \frac{1}{2}}(1-{\displaystyle \frac{1}{N_C}})`$ $`Q(e)=-1`$ $`Q(\nu )=0`$ (2) In this demonstration of charge quantization, the isospin and the hypercharge of the Higgs were left completely unconstrained \[ 5 \]. It was then shown that the complete structure of the Standard Model is reproduced without specifying any quantum numbers of the Higgs. Thus the Higgs is unlike any particle known to us.
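A minimal sketch evaluating the charges of Eq. (2) for general $`N_C`$ follows; it is purely a numerical restatement of the formulas above.

```python
from fractions import Fraction

# Evaluate Eq. (2): quark charges for N_C colors; leptons are N_C-independent.
def charges(NC):
    Qu = Fraction(1, 2) * (1 + Fraction(1, NC))
    Qd = -Fraction(1, 2) * (1 - Fraction(1, NC))
    return Qu, Qd

for NC in (3, 5):
    Qu, Qd = charges(NC)
    print(f"N_C = {NC}:  Q(u) = {Qu},  Q(d) = {Qd},  Q(e) = -1,  Q(nu) = 0")
# N_C = 3 reproduces the familiar 2/3 and -1/3.
```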
Hence it was predicted \[ 5 \] by the author that the Higgs is not a physical particle. Note that the expression for Q in (1) arose due to spontaneous symmetry breaking of $`SU(N_C)\times SU(2)_L\times U(1)_Y`$ (for $`N_C=3`$) to $`SU(N_C)\times U(1)_{em}`$ through the medium of a Higgs with arbitrary isospin T and hypercharge $`Y_\varphi `$. What happens when, at higher temperatures such as those found in the early universe, the $`SU(N_C)\times SU(2)_L\times U(1)_Y`$ symmetry is restored? Then the parameter ‘h’ in the electric charge definition remains undetermined. Note that ‘h’ was fixed as in (1) by spontaneous symmetry breaking through the Higgs. Without it, ‘h’ remains unknown. Earlier \[ 4 \] we had found this for a Higgs doublet. Now we find that this is still true for a Higgs with any arbitrary isospin and hypercharge. As ‘h’ is not defined, the electric charge is not defined. Hence, when the electroweak symmetry is restored, irrespective of the Higgs isospin and hypercharge, the electric charge disappears as a physical quantity. Hence there was no electric charge in the early universe. In fact, the property that there was no electric charge in the early universe can be understood this way. The fact that the L-charges were equal to the R-charges is a property of $`U(1)_{em}`$. This property need not hold if $`U(1)_{em}`$ is not present. And indeed, when the symmetry is restored to $`SU(2)_L\times U(1)_Y`$ (at higher temperatures and/or in the early universe), the L-charges and the R-charges (if they exist at all) need not be equal to each other. If different, how different? These lead to self-contradictions. These contradictions can be prevented only if there were no charges present when $`U(1)_{em}`$ was not present. And this is what has been consistently demonstrated here on most general grounds. Let us now look at the Superstring Theories. What is the structure of electric charge in Superstring Theories, and how does it relate to the Standard Model discussed above and in Refs \[ 1-5 \]? The appearance of color-neutral string states carrying fractional electric charge (e.g., $`q=\pm \frac{1}{2},\pm \frac{1}{3}`$) is a well-known problem in string theory \[ 6,7 \]. This appears to be generic in string theories. In string theory, in fact, there are always fractionally charged states unless the group ‘levels’ obey the condition \[ 8 \] $$3k_1+3k_2+4k_3=0\mathrm{mod}12$$ (3) This condition is indeed obeyed by the standard GUT choice $`k_1=\frac{5}{3},k_2=k_3=1`$. However, at low energy one would require the SM group $`SU(3)\times SU(2)\times U(1)`$ to survive. Here this does not happen. An exact $`SU(5)`$ symmetry survives down to low energies, and this is unwelcome. Thus, one has the following possibilities:

(1) One accepts the canonical values of the $`k_i`$’s and tries to justify the existence of fractionally charged states at low energies. This is in conflict with some experimental constraints on terrestrial fractional charges.

(2) One tries to choose non-canonical $`k_i`$’s. This idea then runs into conflict with other established ideas of extensions beyond the SM.

(3) One chooses the canonical values for the $`k_i`$’s, and the fractionally charged particles are shunted to very high mass states. Hence present experiments cannot touch them.

All these are well-known problems of the electric charge in the Superstring Theories \[ 9 \]. However, here attention is drawn to the fact that all putative extensions of the Standard Model should reduce smoothly and consistently to the Standard Model at low energies.
Not only that, all these extensions should be consistent with the predictions of the Standard Model at very high temperatures. Contrary to naive expectations, the SM does make specific predictions at very high temperatures too. For example, one clear-cut prediction of the Standard Model, as shown here and earlier \[ 4 \], is that at high enough temperatures (as in the early universe), when the unbroken $`SU(3)\times SU(2)\times U(1)`$ symmetry is restored, there is no electric charge. GUTs and other standard extensions of the SM are incompatible with this requirement \[ 3,4 \]. What about Superstring Theories? Quite clearly, in Superstring Theories the electric charge generically exists right up to the Planck scale. Hence, as per this theory, the electric charge, as an inherent property of matter, has existed right from the beginning. This is not correct in the SM. As shown here and earlier \[ 4 \], the electric charge came into existence at a later stage in the evolution of the Universe, when the $`SU(2)_L\times U(1)_Y`$ group was spontaneously broken to $`U(1)_{em}`$. It was not there all along. This is because electric charge is a derived quantity \[ 4 \]. Hence we find that in this regard the Superstring Theories are inconsistent with the SM. We have shown that the structure of the electric charge and its quantization property in the Standard Model are very restrictive. It turns out that even more demanding and restrictive is the property that there was no electric charge before the spontaneous symmetry breaking of the electro-weak group. Any theory which is incompatible with this requirement cannot be a valid theory of nature. Along with most (all?) extensions of the SM \[ 3,4 \], Superstring Theories also fall in this category. In spite of their promise, the Superstring Theories have this fatal flaw built into them.

REFERENCES

1. A. Abbas, Phys. Lett. B 238 (1990) 344
2. A. Abbas, J. Phys. G 16 (1990) L163
3. A. Abbas, Physics Today, July 1999, p. 81-82
4. A. Abbas, ‘Phase transition in the early universe and charge quantization’; hep-ph/9503496
5. A. Abbas, ‘What is the Standard Model Higgs?’; hep-ph/9912243
6. X. G. Wen and E. Witten, Nucl. Phys. B 261 (1985) 651
7. G. Athanasiu, J. Atick, M. Dine and W. Fischler, Phys. Lett. B 214 (1988) 55
8. A. N. Schellekens, Phys. Lett. B 237 (1990) 363
9. J. Polchinski, ‘String Theory’, Cambridge University Press, 1998