no-problem/9909/hep-lat9909113.html
# $`N=1`$ Super Yang-Mills on the Lattice in the Strong Coupling Limit

## 1 Introduction

The pure $`N=1`$ SUSY Yang-Mills (SYM) theory with $`SU(N_c)`$ gauge group is described by the purely gluonic action plus one flavour of Majorana fermions in the adjoint representation of the colour group. The non-perturbative aspects of this class of theories were intensely investigated in the past, and recently there has been renewed interest in the subject. These theories can also be studied on the lattice using standard lattice methods. However, when supersymmetric gauge theories are formulated on the lattice, problems arise because the lattice regularization spoils supersymmetry. Nevertheless, in Ref. it is shown how to recover supersymmetry in the continuum limit by using appropriately renormalized operators for the SUSY and chiral currents. Two different collaborations have carried out numerical analyses of SYM on the lattice (quenched and unquenched respectively), following the guidelines of Ref. . The main reason for performing an unquenched analysis from the beginning is that in the SYM theory the quenched approximation cannot be justified (in the continuum limit) on the basis of large $`N_c`$ dominance. However, in Ref. it is shown that the results on the spectrum, obtained in the quenched approximation, match the supersymmetry predictions within their statistical errors.

Our work aims at complementing these numerical studies with analytical information on the strong coupling, large $`N_c`$ region of the theory. While not the physical (i.e. continuum) regime, this limit allows one to extract relevant information on the dynamics of the lattice theory, and supplies a guideline for Monte Carlo simulations. Leaving supersymmetry aside, our results can be regarded as an analysis of the spectrum of Yang-Mills fields coupled to Majorana quarks in the adjoint representation of the group. The large $`N_c`$, strong coupling analysis reported in Refs. was performed in arbitrary space-time dimension. In the present paper, we summarize the main results of Ref. .

## 2 Strong Coupling

We begin by writing down the lattice action we will be working with:

$$S=\beta S_g+\frac{1}{2}\mathrm{\Psi }_i\mathrm{\Psi }_j𝐌_{ij},$$ (1)

where $`\beta S_g`$ is the pure gauge part and $`\mathrm{\Psi }_i`$ is a Grassmann variable representing the field of a Majorana fermion. Note that the indices $`i`$ and $`j`$ run over space-time, colour and Dirac indices. The matrix $`𝐌`$ must be antisymmetric and its form (in compact notation) is given by

$$𝐌_{ij}=C\left[𝐈-\kappa \sum _\alpha \delta _{m,n+\alpha }U_\alpha (n)(r𝐈-\gamma _\alpha )\right]$$ (2)

where $`𝐈`$ and $`C`$ are the unit and charge conjugation matrices respectively, $`r`$ is the Wilson parameter, $`\kappa `$ is the hopping parameter and $`n,m`$ parametrize lattice points. The index $`\alpha `$ labels the $`2d`$ possible nearest neighbour steps and their corresponding lattice vectors. For forward steps $`U_\alpha (n)`$ labels the adjoint gauge link variable and $`\gamma _\alpha `$ the Dirac matrices. The lattice strong coupling expansion is an analytical method which provides information on the behaviour of lattice gauge theories at small values of $`\beta `$ and of the hopping parameter $`\kappa `$.
Our precise technique can be summarized as follows:

* As customarily done, we replace the Pfaffian $`\text{Pf}(𝐌)`$ (which appears due to the Majorana character of the fermions) by the positive square root of the determinant. This is indeed rigorous for $`|\kappa |<\frac{1}{2d(|r|+1)}`$.
* The hopping parameter expansion around the origin allows one to express the propagator $`𝐌^{-1}`$ and the determinant $`\text{det}(𝐌)`$ as sums over paths on the lattice.
* The integration over the group is performed at large $`N_c`$. The only leading lattice paths are those with zero area (pure spike paths).
* The leading lattice path resummation is performed for any $`r`$ and in any dimension.

In our analysis we keep $`r`$ arbitrary because this allows for the possibility of searching for multicritical points. A first crucial result is that, taking into account the $`SU(N_c)`$ group integration at large $`N_c`$, the quenched and OZI approximations turn out to be exact in the large $`N_c`$ limit.

## 3 Results on propagators and spectrum

We concentrate upon gauge invariant operators of the form:

$$𝐎_i(x)=\mathrm{\Psi }_{A_1}^{a_1}(x)\mathrm{\cdots }\mathrm{\Psi }_{A_p}^{a_p}(x)(𝒮_i)_{A_1\mathrm{\cdots }A_p}𝒞_i^{a_1\mathrm{\cdots }a_p},$$ (3)

where $`\{𝒮_i\}`$ and $`\{𝒞_i\}`$ are complete sets of spin and invariant colour tensors, respectively, of rank $`p`$. Note that when $`p=2`$ a basis for $`𝒮_i`$ is provided by the Clifford algebra basis in $`d`$ dimensions. Our specific aim is to compute the following quantities at strong coupling

$$\langle 𝐎_i(x)\rangle ,\phantom{xxx}G_{ij}(x-y)\equiv \langle 𝐎_i(x)𝐎_j(y)\rangle ,$$

where as usual $`\langle \mathrm{}\rangle `$ means vacuum expectation value. We compute the main formulas (which are valid at large $`N_c`$ and in any dimension $`d`$) for the propagators of the $`p`$-gluino operators $`G_{ij}(x)`$, obtained after resumming over paths. The main result for propagators is:

$$G_{ij}(x)=R_p(\xi )\prod _\mu \int \frac{d\phi _\mu }{2\pi }e^{ı\phi x}\langle 𝒮_i|[\mathrm{\Theta }_p(\xi )𝐈-\stackrel{~}{𝐀}_p(\phi )]^{-1}\stackrel{~}{𝐂}_p^{-1}|𝒮_j\rangle ,$$ (4)

where $`𝐈`$ is the unit matrix and

$$\stackrel{~}{𝐂}_p\equiv \underset{p}{\underbrace{C\otimes C\otimes \mathrm{\cdots }\otimes C}},\phantom{xx}\stackrel{~}{𝐀}_p(\phi )\equiv \kappa ^p\sum _{\alpha \in I}e^{ı\phi _\alpha }\underset{p}{\underbrace{(r-\gamma _\alpha )\otimes \mathrm{\cdots }\otimes (r-\gamma _\alpha )}},$$

$$\mathrm{\Theta }_p(x)\equiv (1-x)^p+\frac{x^p}{(2d-1)^{p-1}},\phantom{xx}\xi \equiv \frac{1-\sqrt{1-4(2d-1)\kappa ^2(r^2-1)}}{2},$$ (5)

and where $`C`$ is the charge conjugation matrix. The reader is referred to Ref. for the expressions of the functions $`R_{2,3}(x)`$, as well as for the definition of the kets $`|𝒮_i\rangle `$. In order to obtain analytical formulas for the propagator and masses, one needs to invert the matrix $`\mathrm{\Theta }_p(\xi )𝐈-\stackrel{~}{𝐀}_p(\phi )`$ appearing in Eq. (4). This can be done in an elegant way for arbitrary space-time dimension and $`p=2`$ by mapping the problem into a finite dimensional many-body fermion problem (gamma-fermions).
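As a quick numerical illustration of the building blocks in Eq. (5), the following minimal Python sketch (ours; the parameter values are illustrative and chosen to respect the Pfaffian bound quoted above) evaluates $`\xi `$ and $`\mathrm{\Theta }_p`$:

```python
import math

def xi(kappa, r, d):
    # resummation variable of Eq. (5); real inside the strong coupling regime
    disc = 1.0 - 4.0 * (2 * d - 1) * kappa**2 * (r**2 - 1.0)
    if disc < 0:
        raise ValueError("outside the convergence region of the path resummation")
    return 0.5 * (1.0 - math.sqrt(disc))

def theta_p(x, p, d):
    # Theta_p(x) entering the p-gluino propagator denominator of Eq. (4)
    return (1.0 - x)**p + x**p / (2 * d - 1)**(p - 1)

d, r = 4, 2.0
kappa = 0.04                     # below the bound 1/(2d(|r|+1)) = 1/24
x = xi(kappa, r, d)
print(x, theta_p(x, 2, d), theta_p(x, 3, d))
```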
The results for the scalar mass $`M_S`$ and the lightest pseudoscalar mass $`M_P`$, which would enter the lowest supermultiplet according to the analysis of Veneziano and Yankielowicz, are, for any even dimension, the following:

$$\mathrm{cosh}(M_S)=|\mathrm{\Phi }_2(\xi )\pm (d-1)|,\phantom{xx}\mathrm{cosh}(M_P)=\theta \mathrm{\Xi }-H,$$

$$H=\sqrt{(\theta ^2-1)(\mathrm{\Xi }^2-1)+(d-1)^2ϵ^2},\phantom{xx}\mathrm{\Phi }_2(x)=\frac{(1-x)^2(2d-1)+x^2}{2x(1-x)},\phantom{xx}\mathrm{\Xi }=\mathrm{\Phi }_2(\xi )-(d-1)r^2ϵ,$$ (6)

where the $`\pm `$ in $`\mathrm{cosh}(M_S)`$ is for $`|r|<1`$ and $`|r|>1`$ respectively, and $`ϵ\equiv 1/(r^2-1)`$, $`\theta \equiv ϵ(r^2+1)`$. The corresponding critical line (where the lightest meson becomes massless) marks the edge of the physical region as well as the boundary of validity of our formulae. The equation for this critical line is

$$\mathrm{\Phi }_2(\xi )=d\theta .$$ (7)

In odd dimensions (where the pseudoscalar operator does not exist) the lightest state in the 2-gluino sector is a vector state and its mass is given by

$$\mathrm{cosh}(M_V)=|\mathrm{\Phi }_2(\xi )\theta -(d-1)|.$$

Obtaining general expressions for the spectrum valid in arbitrary dimension for $`p>2`$ is in general a complicated task, and we were able to do it only in some special cases. For example, for $`p=3`$ with a completely antisymmetric spin tensor one gets a spin $`1/2`$ state whose mass is given by:

$$\mathrm{cosh}(M_{\frac{1}{2}})=r\mathrm{\Xi }_3\pm \sqrt{\mathrm{\Xi }_3^2-ϵ},\phantom{xx}\mathrm{\Xi }_3=\frac{\mathrm{\Theta }_3(\xi )ϵ^2}{2\kappa ^3}-rϵ\sigma ,$$

$$\mathrm{\Theta }_3(x)\equiv (1-x)^3+\frac{x^3}{(2d-1)^2},\phantom{xx}\sigma =\sum _i\mathrm{cos}\phi _i,\phantom{x}\phi _i=0,\pi .$$ (8)

Finally, we found that, in the $`(\kappa ,r)`$ plane, there is a special limit where all the states in each $`p`$-gluino sector become degenerate. This limit is obtained by letting $`r\rightarrow \mathrm{\infty }`$ and $`\kappa \rightarrow 0`$ in such a way that the product $`\kappa r`$ is fixed. In this limit the spin-independent $`p`$-gluino mass is given by:

$$\mathrm{cosh}(M_p)=\mathrm{\Phi }\left(\left(\frac{(2d-1)(1-\xi )}{\xi }\right)^{p/2}\right)-\sigma ,\phantom{xx}\mathrm{\Phi }(x)=\frac{1}{2}\left(x+\frac{2d-1}{x}\right).$$ (9)

Masses increase with $`p`$.

## 4 Conclusions

The main results of our work are:

* As mentioned earlier, the quenched and OZI approximations are exact in this limit.
* The pseudoscalar meson is the lightest meson for even space-time dimensions. For odd dimensions the lightest state is the vector meson. The vanishing of their lattice masses marks the critical line.
* The only multicritical point where several meson masses vanish corresponds to the point $`\kappa \rightarrow 0`$, $`r\rightarrow \mathrm{\infty }`$ with $`\kappa r=\frac{1}{2\sqrt{2d-1}}`$.
* For the 3- and 4-dimensional cases, there are no multicritical lines with vanishing 3-gluino masses. The masses of the 3-gluino fermionic states are always positive within the physical region of the $`(\kappa ,r)`$ plane.

## Acknowledgments

E.G. acknowledges the financial support of the TMR network project ref. FMRX-CT96-0090.
no-problem/9909/hep-th9909085.html
# Holography and Noncommutative Yang-Mills

UU-HEP/99-04

Miao Li<sup>1,2</sup> and Yong-Shi Wu<sup>3</sup>

<sup>1</sup>Institute of Theoretical Physics, Academia Sinica, P.O. Box 2735, Beijing 100080

<sup>2</sup>Department of Physics, National Taiwan University, Taipei 10764, Taiwan; mli@phys.ntu.edu.tw

<sup>3</sup>Department of Physics, University of Utah, Salt Lake City, Utah 84112, USA; wu@mail.physics.utah.edu

In this note a recently proposed gravity dual of noncommutative Yang-Mills theory is derived from the relations, recently suggested by Seiberg and Witten, between closed string moduli and open string moduli. The only new input one needs is a simple form of the running string tension as a function of energy. This derivation provides convincing evidence that string theory integrates with the holographic principle, and demonstrates a direct link between noncommutative Yang-Mills theory and holography.

September 14, 1999

By now it has become clear that any consistent theory that unifies quantum mechanics and general relativity requires dramatically new ideas beyond what we have been familiar with. Two such ideas, the holographic principle and noncommutative geometry, have recently attracted increasing attention in the string theory community. The holographic principle, originally motivated by the area dependence of black hole entropy, asserts that all information on a quantum theory of gravity in a volume is encoded in the boundary surface of the volume. Though this principle seems to conflict with our intuition from local quantum field theory, string theory, as a promising candidate for quantum gravity, is believed to integrate with it. Indeed the Maldacena conjecture, motivated by the D-brane models of black holes in string theory, is nothing but an embodiment of the holographic principle: there is an equivalence or correspondence between supergravity (or closed string theory) on an anti-de Sitter space, say of five dimensions, and a supersymmetric Yang-Mills gauge theory on its four-dimensional boundary.

In a parallel development, Yang-Mills theory on a space with noncommutative coordinates, which we will call noncommutative Yang-Mills theory (NCYM), has been found to arise naturally in string theory: first in the multi-D-brane description, then in the D-string solution in the IIB matrix model, then in matrix theory [9,10] or string theory compactifications with nonvanishing antisymmetric tensor background, and most recently in a special limit that decouples closed string contributions from the open string description for coincident D-branes with a constant rank-2 antisymmetric tensor B-background (see and references therein). Right now it is the last case that is the focus of attention. A rather thorough discussion of the aspects of NCYM from the open string versus closed string perspectives has been given in the work of Seiberg and Witten which, among other things, also clarifies several puzzles previously encountered in NCYM, including the one raised by one of us. Moreover, the supposed-to-be gravity duals of NCYM's in the decoupling limit, which generalize the usual Maldacena conjecture without B-background, were also constructed [14,15]. One might think that NCYM is relevant only in non-generic situations, such as the above-mentioned decoupling limit in which the B-background is brought to infinity, so that it cannot shed much light on deep issues such as holography and the quantum nature of spacetime. In this letter we show that this is not true.
With an observation on a direct link between NCYM and its gravity dual, we argue for the opposite: switching on a B-background allows one to probe the nature of holography with NCYM, and will probably lead to uncovering more, previously unsuspected links between a large-N theory and its closed string dual.

One of the central observations in is that the natural moduli to use in open string theory with ends on a set of N coincident D-branes in the presence of a constant B-field are different from those defined for closed strings. The effective action for the system is more elegantly written if one uses the open string metric $`G_{ij}`$ and an antisymmetric tensor $`\theta ^{ij}`$; their relation to the closed string metric $`g_{ij}`$ and the antisymmetric tensor field $`B_{ij}`$ is

$$G_{ij}=g_{ij}-(\alpha ^{\prime })^2(Bg^{-1}B)_{ij},\phantom{xx}\theta ^{ij}=2\pi \alpha ^{\prime }\left(\frac{1}{g+\alpha ^{\prime }B}\right)_A^{ij},$$

where the subscript $`A`$ indicates the antisymmetric part. Our normalization of the B field differs from that in by a factor $`2\pi `$. Seiberg and Witten noted that in the limit $`\alpha ^{\prime }\rightarrow 0`$ and $`g_{ij}\rightarrow 0`$ (assuming $`B_{ij}`$ is nondegenerate), it is possible to keep $`G_{ij}`$ and $`\theta ^{ij}`$ fixed with a fixed $`B_{ij}`$. The tree level effective action surviving this limit is the noncommutative Yang-Mills action, with a star product of functions defined using $`\theta ^{ij}`$:

$$f\star g(x)=e^{\frac{i}{2}\theta ^{ij}\partial _i^x\partial _j^y}f(x)g(y)|_{y=x}.$$

This is equivalent to the noncommutativity $`x^ix^j-x^jx^i=i\theta ^{ij}`$. The open string coupling constant $`G_s`$, proportional to the Yang-Mills coupling $`g_{YM}^2`$, is also different from the closed string coupling constant. The relation between the two is

$$G_s=g_s\left(\frac{\mathrm{det}G}{\mathrm{det}(g+\alpha ^{\prime }B)}\right)^{1/2}.$$

It is easy to see that in the "double scaling limit" when $`\alpha ^{\prime }\rightarrow 0`$, $`g\rightarrow 0`$, keeping fixed the open string coupling $`G_s`$ (say for D3-branes), $`g_s`$ must be taken to zero too. Thus closed strings decouple from the open string sector described by the NCYM. We will see that this perfectly matches the closed string dual description of the NCYM.

Using the conventions of , the NS fields in the gravity dual proposed for D3-branes with a constant $`B_{23}`$ are [14,15]

$$ds_{str}^2=R^2u^2\left[-dt^2+dx_1^2+\frac{1}{1+a^4u^4}(dx_2^2+dx_3^2)\right]+R^2\frac{du^2}{u^2}+R^2d\mathrm{\Omega }_5^2,$$

$$B_{23}=B\frac{a^4u^4}{1+a^4u^4},\phantom{xx}e^{2\varphi }=\frac{g^2}{1+a^4u^4},$$

where we have set $`\alpha ^{\prime }=1`$. The constant $`B`$ is the value of $`B_{23}`$ at the boundary $`u=\mathrm{\infty }`$, and the constant $`g`$ is the closed string coupling in the infrared $`u=0`$. Here $`R^4=4\pi gN`$, and the parameter $`a`$ is given by

$$Ba^2=R^2.$$

In addition to the usual $`C^{(4)}`$ induced by the presence of D3-branes, there is also an induced $`C^{(2)}`$ field. Its presence is quite natural for D3-branes. Recall that a constant $`B_{23}`$ on the branes can be replaced by a constant magnetic field. Performing an S-duality transformation, this field becomes the electric field $`E=F_{01}`$. This electric field is defined using the dual quanta, thus it is equivalent to a $`C_{01}`$. The u-dependent $`C_{01}`$ is given in :

$$C_{01}=\frac{a^2R^2}{g}u^4.$$

It is natural to interpret the fields appearing in the gravity dual (1) as closed string moduli.
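The decoupling limit quoted above can be made concrete numerically; the following short sketch is ours (the matrix and the scaling choices are arbitrary illustrations, not from the paper) and shows $`G_{ij}`$ and $`\theta ^{ij}`$ approaching finite limits while $`g_{ij}`$ and $`\alpha ^{\prime }`$ go to zero:

```python
import numpy as np

B = np.array([[0.0, 2.0], [-2.0, 0.0]])   # fixed, nondegenerate B-field block

for eps in (1e-1, 1e-3, 1e-6):
    g = eps * np.eye(2)                   # closed string metric -> 0
    alpha = np.sqrt(eps)                  # alpha' -> 0 at the same rate
    G = g - alpha**2 * B @ np.linalg.inv(g) @ B
    M = np.linalg.inv(g + alpha * B)
    theta = 2 * np.pi * alpha * 0.5 * (M - M.T)   # antisymmetric part
    print(eps, G[0, 0], theta[0, 1])
# G[0,0] -> 4 (= B^2) and theta[0,1] -> -pi (= -2*pi/B) stay finite
```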
Note that apart from the $`u^2`$ factor, there is an additional factor $`1/(1+a^4u^4)`$ in the closed string metric on the $`(x_2,x_3)`$ plane. Thus if one is to hold the geometry on the $`(t,x_1)`$ plane fixed, then the geometry on the $`(x_2,x_3)`$ plane shrinks when the boundary is approached. By the UV/IR relation , this means that the closed string metric shrinks to zero in the UV limit from the open string perspective. Here comes our central observation. We identify the UV limit as the "double scaling limit" of , so the fact that $`g_{ij}`$ shrinks in the UV limit is quite natural. In this limit, $`\alpha ^{\prime }`$ must also approach zero. This is just right in the AdS/CFT correspondence. Note that there is an overall factor $`R^2u^2`$ in (1) for the 4D geometry along the D3-branes. This redshift factor can be interpreted as the effective string tension

$$\alpha _{eff}^{\prime }=\frac{1}{R^2u^2}.$$

Therefore $`\alpha _{eff}^{\prime }`$ also approaches zero in the UV limit. The manner in which it approaches zero compared to $`g_{ij}`$ agrees with the limit taken in . Note that here we differ from the philosophy of , in which $`\alpha ^{\prime }`$ itself is taken to zero, while we have set it to 1.

Now we are ready to derive the NS fields in the closed string dual (1) by applying formulas (1) and (1). The way in which Seiberg and Witten derived these formulas remains valid if we treat strings as effective strings at a fixed energy scale when loop effects are included. Thus we can take these formulas as giving relations among the open string moduli and the closed string moduli at a fixed cut-off $`E=u`$. Bigatti and Susskind argued that in the large N limit, the effective action of NCYM can be obtained by replacing the usual product in the effective action of $`𝒩=4`$ SYM by the star product. This in particular implies that there is no renormalization of the open string metric, the Yang-Mills coupling constant $`G_s`$ and the noncommutative modulus $`\theta ^{ij}`$. Now with $`\alpha ^{\prime }`$ replaced by $`\alpha _{eff}^{\prime }`$ at a fixed energy scale, the closed string moduli are renormalized. Due to the rotational symmetry on the $`(x_2,x_3)`$ plane, we introduce the ansatz

$$g_{ij}=f(u)\delta _{ij},\phantom{xx}B_{ij}=h(u)ϵ_{ij}.$$

The first equation in (1) yields

$$\delta _{ij}=\delta _{ij}\left(f+h^2f^{-1}/(R^4u^4)\right),$$

or

$$f^2+\frac{h^2}{R^4u^4}=f.$$

The second equation in (1) leads to

$$2\pi \frac{1}{R^2u^2}\left(f^2+\frac{h^2}{R^4u^4}\right)^{-1}\frac{h}{R^2u^2}ϵ_{ij}=2\pi \frac{a^2}{R^2}ϵ_{ij},$$

where we used the fact that $`\theta ^{ij}`$ is not renormalized and is given by $`(2\pi /B)ϵ_{ij}`$, with $`B=R^2/a^2`$. This second equation is just

$$h=a^2R^2u^4\left(f^2+\frac{h^2}{R^4u^4}\right).$$

Combined with the previous equation this gives $`h=a^2R^2u^4f`$, and substituting this back we find

$$f(u)=\frac{1}{1+a^4u^4}.$$

This is precisely the factor appearing in (1), which was obtained as a solution of the classical equations of motion in closed string theory. With $`h=a^2R^2u^4f`$ we find

$$h(u)=\frac{a^2R^2u^4}{1+a^4u^4}=B\frac{a^4u^4}{1+a^4u^4},$$

also in agreement with (1). Substituting the above solution into (1) with the identification $`G_s=g`$, the energy dependent closed string coupling is also obtained:

$$g_s=g\left(\mathrm{det}(g+\alpha _{eff}^{\prime }B)\right)^{1/2}=g(1+a^4u^4)^{-1/2}.$$

Again, this agrees with (1). The closed string coupling becomes weaker and weaker in the UV limit.
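This little derivation is easy to verify symbolically; the following SymPy sketch (ours) substitutes the claimed solution into the two moduli conditions and into the determinant fixing the dilaton:

```python
import sympy as sp

u, a, R = sp.symbols('u a R', positive=True)

f = 1 / (1 + a**4 * u**4)          # claimed metric factor on the (x2, x3) plane
h = a**2 * R**2 * u**4 * f         # claimed B-field profile
alpha = 1 / (R**2 * u**2)          # running string tension alpha'_eff

# open string metric condition: f^2 + h^2/(R^4 u^4) = f
assert sp.simplify(f**2 + h**2 / (R**4 * u**4) - f) == 0
# theta non-renormalization: h = a^2 R^2 u^4 (f^2 + h^2/(R^4 u^4))
assert sp.simplify(h - a**2 * R**2 * u**4 * (f**2 + h**2 / (R**4 * u**4))) == 0
# det(g + alpha'_eff B) = f^2 + alpha'^2 h^2 = 1/(1 + a^4 u^4),
# hence g_s = g * sqrt(det) = g (1 + a^4 u^4)^(-1/2), matching e^(2 phi)
assert sp.simplify(f**2 + alpha**2 * h**2 - 1 / (1 + a**4 * u**4)) == 0
print("moduli relations verified")
```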
Although this UV asymptotic freedom appears in the closed string dual, it does not mean that there is asymptotic freedom in NCYM: as we have seen, the Yang-Mills coupling is not renormalized in the large N limit.

Having shown that the rather ad hoc looking closed string background is naturally a solution to (1) and (1), one still has room to doubt whether this is a coincidence. To check that our procedure is indeed a correct one, we turn to the case when both $`B_{01}`$ and $`B_{23}`$ are turned on. The solution is also found in [14,15]. We use the Euclidean signature for all coordinates. In such a case, both the geometry of $`(t,x_1)`$ and that of $`(x_2,x_3)`$ shrink at the boundary in a similar manner. Let $`a`$ be defined as before, and $`a^{\prime }`$ related to $`B_{01}`$ in the same way as $`a`$ is related to $`B_{23}`$. We need not repeat the above steps in deriving the metric and the B field, since these fields are block diagonalized and we obtain similar results. The closed string coupling in this case is given by

$$g_s=g\left(\mathrm{det}(g+\alpha _{eff}^{\prime }B)\right)^{1/2}=g(1+a^4u^4)^{-1/2}(1+a^{\prime 4}u^4)^{-1/2},$$

where the determinant is taken of the matrix including all components. This result agrees with the classical solution in [14,15]. The other parts of the closed string metric cannot be reproduced so simply. Due to the result of Bigatti and Susskind, and the unbroken R-symmetry $`SO(6)`$, they must be identical to those in the AdS case without B field background.

We see that the relations among the closed string moduli and the open string moduli contain much more than we could have imagined. With the input $`\alpha _{eff}^{\prime }`$, they determine the closed string dual of NCYM! We venture to conjecture that this is a quite general fact. It can be applied, for instance, to other Dp-brane cases with constant B field, and perhaps to other backgrounds. Also, we expect that $`1/N`$ corrections will at least renormalize the Yang-Mills coupling. The relation (1) likely holds in this case too, thus the closed string coupling must be renormalized by the $`1/N`$ effects.

We now apply our procedure to Dp-branes with constant B field. Switching on $`B_{23}`$ only, the solution in this case is given in . This solution is similar to (1). For the shrinking metric, we replace $`1/(1+a^4u^4)`$ by $`1/(1+(au)^{7-p})`$, where $`a`$ is determined by $`B^2a^{7-p}=R^{7-p}`$. The overall factor $`(Ru)^2`$ in (1) is replaced by $`(Ru)^{(7-p)/2}`$. The u-dependent B field is

$$B_{23}=B\frac{(au)^{7-p}}{1+(au)^{7-p}},$$

and the dilaton field is

$$e^{2\varphi }=g^2u^{(7-p)(p-3)/2}\frac{1}{1+(au)^{7-p}}.$$

To obtain this solution from (1), we need to use $`\alpha _{eff}^{\prime }=(Ru)^{(p-7)/2}`$. Introducing the same ansatz for $`g_{ij}`$ and $`B_{ij}`$ as before, the two equations in (1) combine to yield

$$h=(aR)^{(7-p)/2}u^{7-p}f.$$

Substituting this into either equation in (1) we obtain

$$f=\frac{1}{1+(au)^{7-p}}.$$

This together with (1) results in the correct answer for the B field. Relation (1) determines

$$g_s^2=G_s^2f.$$

To agree with (1) we must have $`G_s^2=g^2u^{(7-p)(p-3)/2}`$. This just means that the open string coupling runs in the same way as in the case when there is no B field. This fact certainly agrees with the result of in the large N limit.
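A quick numerical spot-check of this general-$`p`$ formula (our sketch; the values of $`a`$, $`R`$, $`u`$ are arbitrary) confirms that the quoted $`f`$ and $`h`$ solve the collapsed moduli condition for every $`p`$:

```python
import numpy as np

def check_dp(p, a=0.7, R=1.3, u=2.1):
    # with alpha'_eff = (Ru)^((p-7)/2), the two moduli equations collapse to
    # f^2 + alpha'_eff^2 h^2 = f, solved by the f and h quoted above
    alpha_eff = (R * u) ** ((p - 7) / 2)
    f = 1.0 / (1.0 + (a * u) ** (7 - p))
    h = (a * R) ** ((7 - p) / 2) * u ** (7 - p) * f
    return np.isclose(f**2 + alpha_eff**2 * h**2, f)

print(all(check_dp(p) for p in range(7)))   # True, including the D3 case p = 3
```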
Our derivation of the closed string dual from NCYM is based on the idea that at the dynamical level, namely when the quantum effects in NCYM are included by renormalization down to a fixed energy scale $`E=u`$, the relations among the open string moduli and closed string moduli, viewed in terms of effective strings, should be the ones derived at tree level, as done in . Although this idea may find its root in some previously known facts, such as the Fischler-Susskind mechanism and Polyakov's introduction of the Liouville field to mimic quantum effects in QCD, we have not found a similarly precise statement in the literature.

To complete our derivation of the closed string dual in the D3-brane case, we still need to explain the effective string tension, namely $`\alpha _{eff}^{\prime }=1/(R^2u^2)`$. As we explained, this relation is natural on the supergravity side of the original AdS/CFT correspondence. We have yet to understand it directly in the gauge theory. If we measure energy with the time $`t`$ in the gauge theory with metric component $`g_{00}=1`$, we identify $`E=u`$. In SYM without noncommutative parameters this is the only scale, thus $`\alpha _{eff}^{\prime }\sim 1/E^2=1/u^2`$. Yet we have to attribute the coefficient $`1/R^2`$ to quantum effects. Back in NCYM there are two scales, one determined by $`\theta `$, the other the energy scale. According to the result of , the large N string tension is the same as in ordinary SYM, thus $`\alpha _{eff}^{\prime }=1/(Ru)^2`$ in NCYM too. We have left out the derivation of the metric $`du^2/u^2`$ in the induced dimension. This, together with the other parts of the metric, induces an anomaly term in the special conformal transformation. This term is computed in SYM in at the one loop level, providing a satisfactory interpretation of the full AdS metric. Although NCYM is not conformally invariant, we expect that a calculation similar to that in can be done, justifying $`du^2/u^2`$.

The fact that the geometry on $`(x_2,x_3)`$ shrinks toward the boundary when $`B_{23}\ne 0`$ has been a source of confusion. Our scheme clarifies this issue. The geometry in the supergravity dual is not to be confused with the geometry in NCYM. The latter remains fixed at all energy scales, while the geometry felt by closed strings becomes degenerate in the UV limit, as it ought to according to . There remain quite a few puzzles to be understood within our scheme. We need to understand why and how the GKPW prescription works, and why the calculation of the force between a heavy quark-antiquark pair requires a different prescription . Among other things, it can be shown that there does not exist a nontrivial geodesic connecting two points on the boundary separated in $`x_2`$. This already indicates interesting behavior of correlation functions at the two point function level. We leave these issues for future study.

In conclusion, we have observed that the moduli in the gravity duals of NCYM should be understood as closed string moduli, and that they can be reproduced from the Seiberg-Witten relations (1) between open and closed string moduli, once the string tension in these relations is understood as a running one with the simple energy dependence given in (1). This observation not only helps clarify a few puzzles in understanding the gravity dual of NCYM but also, more importantly, demonstrates a simple and direct connection between NCYM and the holographic principle, either of which is believed to play a role in the ultimate theoretical structure for quantum gravity.
In particular, the existence of such a link between NCYM and holography seems to indicate the fundamental significance of noncommutativity in space or spacetime. (For another perspective pointing in the same direction, see .) In the search for an appropriate geometric framework for the nonperturbative formulation of string/M theory, further exploring the connections between these two themes should be helpful. For instance, it would be interesting to consider further deformations of NCYM, to see what they lead to as the holographic image on the boundary of a closed string dual in the bulk.

ML acknowledges warm hospitality from the Department of Physics, University of Utah. The work of YSW was supported in part by the U.S. NSF through grant PHY-9970701.

References

1. G. 't Hooft, "Dimensional Reduction in Quantum Gravity," gr-qc/9310026.
2. A. Connes, "Noncommutative Geometry," Academic Press, New York, 1994.
3. L. Susskind, "The world as a hologram," hep-th/9409089.
4. J.M. Maldacena, "The Large N Limit of Superconformal Field Theories and Supergravity," hep-th/9711200.
5. E. Witten, "Anti De Sitter Space and Holography," hep-th/9802150; S. Gubser, I. Klebanov and A. Polyakov, Phys. Lett. B428 (1998) 105.
6. A. Connes and M. Rieffel, "Yang-Mills For Noncommutative Two-Tori," in Operator Algebras and Mathematical Physics (Iowa City, Iowa, 1985), p. 237, Contemp. Math. Oper. Alg. Math. Phys. 62, AMS 1987.
7. P.M. Ho and Y.S. Wu, "Noncommutative Geometry and D-branes," Phys. Lett. B398 (1997) 52, hep-th/9611233.
8. M. Li, "Strings from IIB Matrices," hep-th/9612222.
9. A. Connes, M.R. Douglas and A. Schwarz, "Noncommutative Geometry and Matrix Theory: Compactification On Tori," JHEP 9802:003 (1998), hep-th/9711162.
10. P.M. Ho and Y.S. Wu, "Noncommutative Gauge Theories in Matrix Theory," Phys. Rev. D58 (1998) 066003, hep-th/9801147.
11. M.R. Douglas and C. Hull, "D-Branes And The Noncommutative Torus," JHEP 9802:008 (1998), hep-th/9711165.
12. M. Li, "Comments on Supersymmetric Yang-Mills Theory on a Noncommutative Torus," hep-th/9802052.
13. N. Seiberg and E. Witten, "String Theory and Noncommutative Geometry," hep-th/9908142.
14. A. Hashimoto and N. Itzhaki, "Noncommutative Yang-Mills and the AdS/CFT correspondence," hep-th/9907166.
15. J.M. Maldacena and J.G. Russo, "Large N Limit of Non-Commutative Gauge Theories," hep-th/9908134.
16. C.-S. Chu and P.-M. Ho, Nucl. Phys. B550 (1999) 151, hep-th/9812219; V. Schomerus, JHEP 9906:030 (1999), hep-th/9903205; F. Ardalan, H. Arfaei and M.M. Sheikh-Jabbari, JHEP 02, 016 (1999), hep-th/9810072.
17. L. Susskind and E. Witten, "The Holographic Bound in Anti-de Sitter Space," hep-th/9805114.
18. D. Bigatti and L. Susskind, "Magnetic Fields, Branes and Noncommutative Geometry," hep-th/9908056.
19. A. Jevicki, Y. Kazama and T. Yoneya, Phys. Rev. Lett. 81 (1998) 5072.
20. M. Li and T. Yoneya, hep-th/9806240, in a special issue of "Chaos, Solitons and Fractals," 1998.
no-problem/9909/astro-ph9909228.html
# The BeppoSAX view of the hot cluster Abell 2319

## 1. Introduction

Abell 2319 (hereafter A2319) is a rich cluster of galaxies located at a redshift of z=0.056. Dynamical studies in the optical band (see Oegerle et al. 1995, and references therein) have shown that the cluster is actually composed of 2 main clumps: the main cluster and a subcluster localized some 10′ NW of the cD galaxy and behind the main cluster. At radio wavelengths A2319 is permeated by a radio halo (Feretti, Giovannini & Böhringer 1997, hereafter F97), approximately oriented in the NE-SW direction. In the X-ray band a temperature of ∼10 keV was measured with the Einstein MPC (David et al. 1993). Analysis of the ROSAT HRI image (F97, Peres et al. 1998) shows that A2319 does not have the strongly peaked radial profile typical of cooling flow clusters. A temperature map of A2319 has been obtained from ASCA data (Markevitch 1996). Recently, Irwin, Bregman & Evrard (1999) have used ROSAT PSPC data to search for temperature gradients in a sample of galaxy clusters including A2319. While the ASCA measurements show a radial temperature gradient (the temperature is about a factor of 2 smaller at about 2 Mpc than at the core), the ROSAT measurement is consistent with a flat temperature profile. In this Letter we use BeppoSAX data to perform an independent measurement of the temperature map of A2319. We also present the first abundance map of A2319 and the first measurement of the hard (13-50 keV) X-ray spectrum of A2319. The outline of the Letter is as follows. In section 2 we give some information on the BeppoSAX observation of A2319 and on the data preparation. In section 3 we present the analysis of the broad band spectrum (2-50 keV) of A2319. In section 4 we present spatially resolved measurements of the temperature and metal abundance. In section 5 we discuss our results and compare them to previous findings. Throughout this paper we assume H<sub>o</sub>=50 km s<sup>-1</sup> Mpc<sup>-1</sup> and q<sub>o</sub>=0.5.

## 2. Observation and Data Preparation

The cluster A2319 was observed by the BeppoSAX satellite (Boella et al. 1997a) between the 16<sup>th</sup> and the 17<sup>th</sup> of May 1997. We discuss here data from two of the instruments onboard BeppoSAX: the MECS and the PDS. The MECS (Boella et al. 1997b; see Chiappetti et al. 1998 for a particularly attractive interactive presentation) is presently composed of two units, working in the 1-10 keV energy range. At 6 keV, the energy resolution is 8% and the angular resolution is ∼0.7′ (FWHM). The PDS instrument (Frontera et al. 1997) is a passively collimated detector (about 1.5×1.5 degrees f.o.v.) working in the 13-200 keV energy range. Standard reduction procedures and screening criteria have been adopted to produce linearized and equalized event files. Both MECS and PDS data preparation and linearization were performed using the Saxdas package under the Ftools environment. The effective exposure time of the observation was 3.8×10<sup>4</sup> s (MECS) and 2.0×10<sup>4</sup> s (PDS). The observed count rate for A2319 was 0.908±0.006 cts/s for the 2 MECS units and 0.58±0.04 cts/s for the PDS instrument. All spectral fits have been performed using XSPEC Ver. 10.00. Quoted confidence intervals are 68% for 1 interesting parameter (i.e. Δχ²=1), unless otherwise stated.
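For reference, the angular-to-physical conversions used below follow from this cosmology; a minimal Python sketch (ours, not part of the original analysis) for a q<sub>o</sub>=0.5 universe:

```python
import math

C_KMS = 299792.458            # speed of light, km/s
H0 = 50.0                     # km/s/Mpc, as adopted above
Z = 0.056                     # redshift of A2319

def d_angular(z):
    # angular diameter distance for q0 = 1/2: (2c/H0)(1 - 1/sqrt(1+z))/(1+z)
    return (2.0 * C_KMS / H0) * (1.0 - 1.0 / math.sqrt(1.0 + z)) / (1.0 + z)

def arcmin_to_mpc(theta_arcmin, z=Z):
    return d_angular(z) * math.radians(theta_arcmin / 60.0)

print(arcmin_to_mpc(16.0))    # ~1.4 Mpc, the MECS extraction radius used below
print(arcmin_to_mpc(10.0))    # ~0.9 Mpc
```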
## 3. Broad Band Spectroscopy

We have extracted a MECS spectrum from a circular region of 16′ radius (1.4 Mpc) centered on the emission peak. From the ROSAT PSPC radial profile (see F97), we estimate that about 90% of the total cluster emission falls within this radius. The background subtraction has been performed using spectra extracted from blank sky event files in the same region of the detector as the source. The PDS background-subtracted spectrum has been produced by plain subtraction of the "off-" from the "on-source" spectrum. The spectra from the two instruments have been fitted simultaneously with an optically thin thermal emission model (MEKAL code in the XSPEC package), absorbed by a galactic line-of-sight equivalent hydrogen column density, $`N_H`$, of 7.85×10<sup>20</sup> cm<sup>-2</sup>. A numerical relative normalization factor between the two instruments has been added to the spectral fit. The reason is two-fold: a) the BeppoSAX instrument response matrices employed in this Letter (September 1997 release) exhibit slight mismatches in the absolute flux calibration; b) the PDS instrument field of view (1.3 degrees FWHM) covers the entire emission from the cluster, while the MECS spectrum includes emission out to 1.4 Mpc from the X-ray peak. Taking into account the mismatch in the absolute flux calibration, the vignetting of the PDS instrument and the fraction of the emission falling outside of the MECS extraction region, we estimate a normalization factor of 0.76. In the fitting procedure we allow the normalization value to vary within 15% of the above value to account for the uncertainty in this parameter. The MEKAL model is found to fit the data adequately (χ²=183 for 164 d.o.f.). The best fitting values for the temperature and the metal abundance are respectively 9.6±0.3 keV and 0.25±0.03, where the latter value is expressed in solar units.

The PDS data show no evidence of a hard X-ray excess. However, we can derive a lower limit to the volume-averaged intracluster magnetic field, B, responsible for the diffuse radio emission located in the central region of the cluster. The radio halo spectrum shows an index α<sub>r</sub> ≃ 0.92 in the 408-610 MHz frequency range and ≃2.2 in the 610-1400 MHz range; the radio flux is ≃1 Jy at 610 MHz (F97). From the PDS data we can place 90% confidence upper limits of 2.3×10<sup>-11</sup> erg cm<sup>-2</sup> s<sup>-1</sup> and 2.0×10<sup>-11</sup> erg cm<sup>-2</sup> s<sup>-1</sup> for a power-law spectrum with energy index 0.92 and 2.2, respectively. Relating the synchrotron radio halo flux to the X-ray flux upper limits, assuming inverse Compton scattering of relativistic electrons off the 3K background photons, we determine lower limits on B of ≃0.04 μG and ≃0.035 μG, respectively. The equipartition magnetic field is estimated to be 0.48 μG (F97). We also determine upper limits to the energy density of the emitting electrons of ≃1.4×10<sup>-12</sup> erg cm<sup>-3</sup> and 9.3×10<sup>-12</sup> erg cm<sup>-3</sup> for α<sub>r</sub>=0.92 and 2.2, respectively, using a size of ≃0.66 Mpc in radius for the radio halo.

## 4. Spatially Resolved Spectral Analysis

A proper analysis of extended sources requires that the spectral distortions introduced by the energy dependent PSF be correctly taken into account.
In the case of the MECS instrument onboard BeppoSAX the PSF, which is the convolution of the telescope PSF with the detector PSF, is found to vary only weakly with energy (D'Acri, De Grandi & Molendi 1998). This lack of a strong chromatic aberration results from the fact that the telescope PSF degradation with increasing energy is approximately balanced by the improvement of the detector spatial resolution. Though we expect spectral distortions to be small, we have taken them into account using the Effarea program publicly available within the latest Saxdas release. The Effarea program convolves the ROSAT PSPC surface brightness with an analytic model of the MECS PSF to estimate the spectral distortions. A more extensive description of the method may be found in D'Acri, De Grandi & Molendi (1998). The Effarea program also includes corrections for the energy dependent telescope vignetting, which are not discussed in D'Acri et al. (1998). The Effarea program produces effective area files, which can be used to fit spectra accumulated from annuli or from sectors of annuli.

### 4.1. Radial Profiles

We have accumulated spectra from 6 concentric annular regions, with inner and outer radii of 0′-2′, 2′-4′, 4′-6′, 6′-8′, 8′-12′ and 12′-16′. The background subtraction has been performed using spectra extracted from blank sky event files in the same region of the detector as the source. For the 5 innermost annuli the energy range considered for spectral fitting was 2-10 keV, while for the outermost annulus, due to the strong contribution of the instrumental background in the 8-10 keV band, the fit was restricted to the 2-8 keV range. The ROSAT PSPC and HRI images of A2319 (F97) show excess emission, with respect to a radially symmetric profile, in the NW and NE directions. Although the resolution of the MECS image (see figure 2) is considerably poorer than that of the ROSAT images, evidence of the excess is seen also in our data. The excess emission in the NW direction is most likely associated with a subcluster identified from velocity dispersion measurements (Oegerle et al. 1995), while the structure in the NE direction is coincident with diffuse radio emission observed at 20 cm (F97). To avoid possible contamination of the radially averaged spectra we have excluded data from the NW and NE sectors of the third and fourth annuli. An analysis of the excluded sectors is presented in the next subsection. We have fitted each spectrum with a MEKAL model absorbed by the galactic $`N_H`$ of 7.85×10<sup>20</sup> cm<sup>-2</sup>. In figure 1 we show the temperature and abundance profiles obtained from the spectral fits. The average temperature and abundance for A2319 are found to be respectively 9.7±0.3 keV and 0.30±0.03, in solar units. The temperature profile is flat, with no indication of a temperature decline with increasing radius. A reduction of the temperature by 1/3, when going from the cluster core out to 16′ (1.4 Mpc), can be excluded in the present data at the 99% confidence level. The abundance profile is consistent with being flat, although the large error associated with the outermost annulus prevents us from excluding variations in this region. An abundance enhancement of a factor 2 in the innermost region can be excluded in these data at more than the 99.99% level. We have used the Fe K<sub>α</sub> line as an independent estimator of the ICM temperature.
We recall that for temperatures larger than a few keV the Fe line comes mostly from the He-like Fe line at 6.7 keV and the H-like Fe line at 7.0 keV. As the temperature increases the contribution from the He-like Fe line decreases while the contribution from the H-like Fe line increases; thus the intensity ratio of the He-like to the H-like Fe line can be used to estimate the temperature. The MECS instrument does not have sufficient spectral resolution to resolve the 2 lines, but it can be used to determine the centroid of the observed line, which depends on the relative contribution of the He-like and H-like lines to the observed line and therefore on the gas temperature. The position of the centroid of the Fe K<sub>α</sub> line is essentially unaffected by the spectral distortion introduced by the energy dependent PSF and it depends only weakly on the adopted continuum model. Thus it allows us to derive an independent and robust estimate of the temperature profile. Considering the limited number of counts available in the line, we have performed the analysis on 2 annuli with bounding radii 0′-8′ and 8′-16′. We have fitted each spectrum with a bremsstrahlung model plus a line, both at a redshift of z=0.056 (ZBREMSS and ZGAUSS models in XSPEC), absorbed by the galactic $`N_H`$. A systematic negative shift of 40 eV has been included in the centroid energy to account for a slight miscalibration of the energy to pulse-height channel relationship near the Fe line. To convert the energy centroid into a temperature we have derived an energy centroid vs. temperature relationship. This has been done by simulating thermal spectra, using the MEKAL model and the MECS response matrix, and fitting them with the same model which has been used to fit the real data. In figure 1 we have overlaid the temperatures derived from the centroid analysis on those previously obtained through the thermal continuum fitting. The two measurements of the temperature profile are in agreement with each other. The moderate statistics available in the line do not allow us to place very tight constraints on temperature gradients.

### 4.2. Maps

We have divided A2319 into 4 sectors: NW, including the subcluster located behind A2319, SW, SE and NE. Each sector has been divided into 3 annuli with bounding radii 2′-4′, 4′-8′ and 8′-16′. In figure 2 we show the MECS image with the sectors overlaid. The background subtraction has been performed using spectra extracted from blank sky event files in the same region of the detector as the source. We have fitted each spectrum with a MEKAL model absorbed by the galactic $`N_H`$. In figure 3 we show the temperature profiles obtained from the spectral fits for each of the 4 sectors. In all the profiles we have included the temperature obtained for the central region of radius 2′. Fitting each radial profile with a constant we derive the following average sector temperatures: 9.8±0.4 keV for the NW sector, 9.7±0.5 keV for the SW sector, 9.1±0.5 keV for the SE sector and 10.1±0.5 keV for the NE sector. All sector averaged temperatures are consistent with the average temperature for A2319 derived in the previous subsection. The χ² values derived from the fits indicate that all temperature profiles are consistent with being flat. The temperature of the third annulus (bounding radii 4′-8′) of the NE sector is found to be higher than the average cluster temperature at a significance level of 95%.
No significant temperature decrement is found in the third annulus (bounding radii 4′-8′) of the NW sector, where the subcluster is localized. Since this region includes the emission of the subcluster as well as a fair amount of emission from A2319, we have accumulated a spectrum from a smaller sector (see figure 2), centered on the subcluster evidenced in figure 2. For this region we find a temperature of 6.9<sup>+1.2</sup><sub>-1.0</sub> keV, indicating that the subcluster may have a somewhat lower temperature than A2319. In figure 4 we show the abundance profiles for each of the 4 sectors. In all profiles we have included the abundance obtained for the central region of radius 2′. Fitting each radial profile with a constant we derive the following sector averaged abundances: 0.29±0.04 for the NW sector, 0.29±0.05 for the SW sector, 0.30±0.04 for the SE sector and 0.27±0.04 for the NE sector. All sector averaged abundances are consistent with the average abundance for A2319 derived in the previous subsection. The χ² values derived from the fits indicate that all abundance profiles are consistent with being constant. The abundance of the third annulus (bounding radii 4′-8′) of the NE sector is found to be smaller than the average cluster abundance at a significance level of 95%. The abundance of the subcluster, estimated from the third annulus (bounding radii 4′-8′) of the NW sector, 0.22±0.07, or from the smaller sector shown in figure 2, 0.28±0.11, is consistent with the average abundance of A2319.

## 5. Discussion

Previous measurements of the temperature structure of A2319 have been performed by Markevitch (1996), using ASCA data, and by Irwin et al. (1999), using ROSAT PSPC data. The average temperature, 10.0±0.7 keV, and abundance, 0.3±0.08, reported by Markevitch are in good agreement with those presented in this work. The temperature profile presented by Markevitch shows that the cluster is isothermal, kT ≃ 11 keV, within 10′ (0.9 Mpc) of the core (however a slightly cooler spot associated with the NW structure is found in this region); the temperature in the annulus with bounding radii 10′-20′ (0.9-1.8 Mpc) is smaller, kT ≃ 8 keV; finally, in the outermost bin, 20′-24′ (1.8-2.1 Mpc), the temperature falls to about 4 keV, although the author remarks that this last point should be considered less reliable than the others. The BeppoSAX data show no evidence of a temperature decline within 16′ (1.4 Mpc) of the cluster core. A reduction of the temperature by 1/3, when going from the cluster core out to 16′, can be excluded at the 99% confidence level. To compare the ASCA measurement with ours, we have converted the 90% confidence errors reported in figure 2 of Markevitch (1996) into 68% confidence errors by dividing them by 1.65 (this is the factor that Markevitch & Vikhlinin 1997 apply to the 68% confidence errors of Briel & Henry 1994 to convert them to 90% errors). When we compare the individual data points covering the same radial range we find that the temperature differences are always significant at less than the 90% level. To compare the profiles we have derived the best fitting linear relationship for the ASCA profile and compared it with our data, finding a χ² of 21.9 for 6 d.o.f. (probability of 1.3×10<sup>-3</sup>).
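Tail probabilities of this kind are easy to verify; a one-line check with SciPy (our sketch, not part of the original analysis):

```python
from scipy.stats import chi2

# survival function gives the chance probability of exceeding the fitted chi^2
print(chi2.sf(21.9, df=6))    # ~1.3e-3, as quoted above
```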
We have also compared the best fitting linear relation from our profile to the ASCA profile, finding a χ² of 12.4 for 3 d.o.f. (probability of 6.1×10<sup>-3</sup>). Therefore, it seems that the two profiles are incompatible with one another. Recently Irwin, Bregman & Evrard (1999) have used ROSAT PSPC hardness ratios to measure temperature gradients for a sample of nearby galaxy clusters, which includes A2319. From their analysis they conclude that A2319 is isothermal out to 18′ from the cluster core. From the analysis of the temperature map we find an indication of a smaller than average temperature, 6.9<sup>+1.2</sup><sub>-1.0</sub> keV, at the position of the subcluster. A similar result was also found in Markevitch (1996).

The average abundance of A2319 has been previously measured, using ASCA data, by Markevitch (1996), who finds 0.3±0.08, by Fukazawa et al. (1998), who find 0.17±0.03 for Fe and 0.46±0.54 for Si, and by Allen & Fabian (1998), who find 0.33±0.06. Our measurement, like Markevitch's, given the adopted spectral ranges (E>2.0 keV and E>2.5 keV respectively), is essentially an Fe abundance measurement. Our results are in agreement with the measurements by Markevitch (1996) and by Allen & Fabian (1998) and in disagreement with the one presented by Fukazawa et al. (1998). In this Letter we present, for the first time, abundance profiles and maps for A2319. The radial abundance profile is flat. A2319 seems to conform to the general rule that non-cooling flow clusters do not present abundance enhancements in their cores. In the third annulus (bounding radii 4′-8′) of the NE sector we find evidence of a temperature increase and of an abundance decrease, with respect to the average values, both significant at about the 95% level. F97, from the analysis of ROSAT PSPC and HRI images, find evidence of excess emission in this region. The same authors, from the analysis of the 20 cm radio map of A2319, also find evidence of diffuse radio emission in this region. They argue that the presence of the X-ray and radio structures may be the result of an ongoing merger event in the NE direction. Our measurement of a temperature increase in the region supports their conjecture. Indeed simulations show that clusters undergoing a merger event should experience a temperature enhancement in the merger region (e.g., Schindler & Müller 1993). The detection of an abundance decrease in the same region may indicate that the subcluster merging with A2319 is poor in metals.

We acknowledge support from the BeppoSAX Science Data Center. We thank G. Zamorani for a critical reading of the manuscript.
no-problem/9909/cond-mat9909017.html
# MODIFICATION OF 1D BALLISTIC TRANSPORT USING AN ATOMIC FORCE MICROSCOPE

## 1 Introduction

Recent advances in low temperature Scanning Probe Microscopy (SPM) technology and the availability of high mobility 2DEGs have made possible the imaging of several quantum phenomena. Images of 2DEG electron compressibility in the quantum Hall state have been produced, and a single electron transistor (SET) has been manufactured on a glass tip to detect static charge. The charged tip AFM has been used to locally perturb the 2DEG electrostatic potential, to cause electron backscattering through a 1D channel, and so image the ballistic electron flux. Numerical calculations of the electric field from a conical tip have shown that the half maximum perturbation occurs at a radial distance approximately equal to the 2DEG depth, which limits the spatial resolution of this technique.

## 2 Experiment

In this paper we present images produced by recording the conductance or transconductance through a two terminal 1D ballistic channel, as a charged AFM tip is scanned over the channel region. The 1D channel was created in a 2DEG at a GaAs/AlGaAs heterojunction 98 nm beneath the surface, with a 1-2×10<sup>18</sup> cm<sup>-3</sup> Si doped layer from 40 nm to 80 nm above the 2DEG. From Shubnikov-de Haas measurements the carrier concentration was calculated as 2.4×10<sup>11</sup> cm<sup>-2</sup>, with an electron mean free path of 25 μm. The 1D channel was defined by locally depleting electrons from the 2DEG beneath negatively biased 700 nm wide split gate surface electrodes. The surface electrodes extend 30 nm above the GaAs surface, and were manufactured using e-beam technology. Our AFM operates from room temperature to 1.5 K, and uses a piezoresistive tip because of the complications of making optical measurements of deflection at low temperatures with light sensitive devices. The AFM operates in the conventional topographic mode to locate the split gate region. During the measurements the AFM force feedback is disconnected and the tip scans at a constant height above the surface. We use two experimental configurations to produce images, which are referred to as conductance images and transconductance images. In both configurations a lock-in amplifier measures the ac zero phase component of the channel drain current. For conductance images the ac voltage signal was connected to the channel source with a dc bias applied to the conductive AFM tip. For transconductance images the ac signal was connected to the tip with a dc bias applied to the channel source. The channel conductance is a function of $`V_{gate}`$, so for useful transconductance images we operate within a range of G where $`dG/dV_{gate}`$ remains almost constant.

## 3 Results and discussion

### 3.1 Images of the tip perturbing potential

In fig. 1 we present conductance images where the tip scanned a constant 60 nm above the surface. In (b) a +2.5 V bias was applied to the tip while in (c) −2.5 V was applied, with a 0.1 mV ac signal applied to the channel source. The channel conductance is a sensitive probe of the electrostatic potential, so these images reveal the tip perturbing potential at the 2DEG layer. Fig. 1 (a) shows a gate sweep made with the tip away from the surface, where the dotted lines correspond to the conductance range seen in (c).
If we model the tip as a spherical charge of radius $`r_{tip}`$, then at a distance $`d`$ below the tip we have $`U\propto V_{tip}/\sqrt{\rho ^2+(d+r_{tip})^2}`$, where $`\rho `$ is the radial distance in the 2DEG plane. Fitting this equation to the y-direction single sweep shown in (d), which is indicated by the arrow in conductance image (c), provides a good fit with $`d+r_{tip}=230`$ nm. Including a correction for the change in relative permittivity, we obtain $`r_{tip}=127`$ nm, which is in agreement with observations from topographic images.

### 3.2 Measurement of charge density across the channel width

In fig. 2 (a) we present a contour plot which relates to the charge density across the width of a quantised 1D channel. The AFM was positioned at a series of points across the width of the channel (x axis), and at each point the transconductance was recorded (z axis) while the gate bias was swept from −2 V to channel pinch off (y axis). The series of points passed through the channel centre, which was located using images like those of fig. 1. The data of fig. 2 (a) have been smoothed in the y direction to remove small steps caused by switching. The wavefunctions, and charge density, for the infinite 1D channel with parabolic confinement are well known and shown in fig. 2 (b), where the spacing between the charge density peaks is roughly equal to half the electron wavelength $`\lambda `$. When we introduce the tip perturbing potential to modify the confining potential, the energy levels change by $`\delta E`$, as shown in (c) for a deep 2DEG and in (d) for a shallow 2DEG, as a function of tip position $`x_{tip}`$ across the width of the channel. Note that the peaks in the charge density plots also appear in the same positions on the corresponding $`\delta E`$ plots, but convolved with the curve of the perturbing potential. The significance of the charge density peaks depends on the width of the perturbing potential, and the model predicts that the $`n=1`$ peaks will no longer be observed when $`d>1.2\left(m\omega _0/\mathrm{\hslash }\right)^{-1/2}`$ or $`d>0.3\lambda `$, although increased broadening for higher sub-bands is still predicted. An ac signal is applied to the tip, which oscillates the energy levels. Each energy level determines the gate voltage of the corresponding transition between 1D plateaus. With $`V_{gate}`$ kept constant, the measured transconductance is proportional to the product of the conductance versus $`V_{gate}`$ gradient and $`\delta E`$. Alternatively, the charge density may modulate the capacitive coupling between the tip and the 1DEG to produce a similar response in transconductance. The distinct peaks in the y-direction of (a) are due to the transconductance of 1D quantisation, where a peak corresponds to a transition between 1D sub-bands and a trough to a 1D plateau. On top of the distinct peaks we observe weak oscillations in the x-direction, where one, two, and three peaks are seen for corresponding transitions of up to one, two, and three sub-bands, which we interpret as a measure of the charge density across the width of a quantised 1D channel. The $`n=1`$ peaks are separated by approximately 100 nm, giving an electron wavelength larger than the 2DEG Fermi wavelength due to the reduced electron effective energy and a finite longitudinal momentum in the channel.
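The parabolic-confinement densities of fig. 2 (b) are straightforward to reproduce; a minimal Python sketch (ours; lengths are in units of the oscillator length, so it only illustrates the peak structure, not the device parameters):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def density(n, x):
    # |psi_n(x)|^2 for the n-th 1D harmonic oscillator level
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    psi = norm * np.exp(-x**2 / 2.0) * hermval(x, coeffs)
    return psi**2

x = np.linspace(-4, 4, 801)
for p in (1, 2, 3):                      # p occupied sub-bands
    rho = sum(density(n, x) for n in range(p))
    is_peak = np.r_[False, (rho[1:-1] > rho[:-2]) & (rho[1:-1] > rho[2:]), False]
    print(p, np.round(x[is_peak], 2))    # p charge density peaks across the width
```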
### 3.3 Images of switching states

Switching between stable states in semiconductor devices is frequently observed as a function of time and is known as a Random Telegraph Signal (RTS). When the time constant of the measurement is longer than the switching period a time average of the RTS is measured, which can cause a small 'plateau feature' seen in conductance during gate bias sweeps of 1D channels. An RTS observed in channel conduction is believed to be caused by the probabilistic occupation of a nearby defect state by an electron originating in the 2DEG, which then modifies the channel conduction electrostatically. Fig. 3 (a) and (d) show gate sweeps where the dotted lines indicate the conductance range of the corresponding conductance images (b) and (e). Within the conductance range a small 'plateau' can be seen on both of the gate sweeps, characteristic of switching. The position of the AFM tip also affects the defect state occupation, which is observed in the conductance images as two levels with a sharp transition. Two levels are visible in (b) as a brighter region in the centre, and in (e) as two darker regions around the ends of the gate electrodes. Fig. 3 (c) and (f) show single y direction sweeps taken from the corresponding conductance images, where they are indicated by arrows.

The switch step size is constant over each image at 3.8 μS for (b), which would require a change of 9.3 mV in $`V_{gate}`$ to produce an equivalent effect. We estimate that a change in gate bias corresponds to a 13.3 meV/V change in channel potential, which gives an energy change of 120 μeV due to the switch. From the conductance image we know the defect is located approximately 100 nm from the channel centre, and probably in the donor layer between 40 nm and 80 nm above the 2DEG plane. If the occupation of the defect state is modelled as a point electron source, then to provide the correct energy change the electron is required to be 1 μm from the channel; although screening has been ignored, the model is poor. We propose that the source of the switch is a single electron hopping between two nearby defect states. The possible locations of these two defects are marked on the conductance image, with the first of them occupied to produce the background level. The defects are at heights of 60 nm and 64 nm above the 2DEG plane, and are separated by 10.5 nm, which is of the order of the average dopant spacing of 4.3 nm and near the effective Bohr radius for GaAs.

The two dark switched regions of (e) have the same energy change, which suggests that they originate from the same defect system. When the tip is positioned over the channel, and the nearby 2DEG, the switch remains in the background state. This suggests that this defect system is physically below the channel, where electrons in the channel and 2DEG provide screening. Again two defect states are required to provide the correct energy change of about 100 μeV, at possible distances of 110 nm and 100 nm from the 2DEG plane, marked on the image at a common lateral position (the two states share the same x and y coordinates). When the tip is positioned near the gate electrode ends, where the 2DEG is depleted, the tip potential can penetrate behind the channel, raising the potential of the more distant state relative to the nearer one, as the state nearer the channel is more effectively screened. Further from the channel, but where the 2DEG is still depleted, the tip has less effect and the background state is again occupied.
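The order-of-magnitude energetics above can be checked with a short script; this is our sketch (the unscreened point-charge model and the illustrative dipole geometry are ours, with a relative permittivity of 12.9 assumed for GaAs):

```python
import math

E2_OVER_4PI_EPS0 = 14.3996   # e^2/(4*pi*eps0) in eV*Angstrom
EPS_R = 12.9                 # assumed GaAs static relative permittivity

def coulomb_ev(r_nm):
    # unscreened Coulomb energy of an electron at distance r from the channel
    return E2_OVER_4PI_EPS0 / (EPS_R * r_nm * 10.0)

# a single point charge only reproduces ~120 ueV at ~1 um, as stated above
print(coulomb_ev(1000.0) * 1e6, "ueV")
# an electron hopping ~10 nm between two sites ~120 nm away gives the same order
print((coulomb_ev(115.0) - coulomb_ev(125.5)) * 1e6, "ueV")
```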
By asymmetrically biasing the surface electrodes we observe movement of the channel centre in conductance images. Channel lateral position has been studied theoretically , and a total movement of $`\mathrm{\Delta }x=100`$ nm has been deduced from the effect of a defect within the channel , though this enabled the channel centre position to be measured in only one direction. For this experiment we used a slightly wider 800 nm split gate orientated in the y direction, but on the same device and with the same AFM tip. Fig. 4 shows three conductance images made with asymmetric gate biases $`\mathrm{\Delta }V_{gate}=V_{upper}-V_{lower}`$ of 3.55 V, 0 V, and $`-4.27`$ V respectively, with average gate biases $`\overline{V}_{gate}=(V_{upper}+V_{lower})/2`$ of $`-3.14`$ V, $`-2.92`$ V, and $`-3.06`$ V set to obtain the same initial conductance. Note that these images were produced consecutively in the order (b), (c), then (a) to avoid the possibility of misinterpreting mechanical drift as channel movement. The channel centre was determined in both the x and y directions by fitting a parabola to the region around the conductance minimum. Fig. 4 (d), (e), and (f) show y direction single sweeps reproduced from the respective conductance images where they are indicated by arrows, and were selected to include the channel centre in the x direction. As is evident from the single sweeps, the perturbing potential is itself not symmetric, which is why a fit to $`V_{tip}/\sqrt{\rho ^2+(d+r_{tip})^2}`$ was rejected; the asymmetry is probably caused by screening of the tip potential by the surface electrodes. The channel centres are indicated on the conductance images by $`+`$, $``$, and $`\times `$ at positions relative to $``$ of (98 nm, 52 nm), (0 nm, 0 nm), and ($`-103`$ nm, $`-50`$ nm). The unexpected channel movement in the x direction is caused by imperfections in the surface electrodes, or disorder from the doped layer, which lead to imperfections in the confining potential. The effect is principally seen in the x direction due to the much smaller x component of the confining potential gradient, and therefore the increased sensitivity to confining potential imperfections.

## 4 Conclusion

We have used a conductive AFM tip to modify the conductance of a 1D ballistic channel. By modelling the tip as a spherical charge we obtain a good fit to the observed tip potential perturbation. We have made measurements which reveal structure across the width of the channel, corresponding to the predicted charge density for one, two, and three sub-bands. We have produced images with two levels due to an electron hopping between defect states. When the defect system was beneath the 2DEG plane we obtained images revealing the 1D channel which screened the tip potential. We have observed channel movement when the surface electrodes were biased asymmetrically. Movement along the length of the channel is believed to be due to electrode imperfections.

## Acknowledgments

This work was funded by the EPSRC and the RW Paul Instrument Fund.
# Even flavor QED3 in an external magnetic field

NTUA-TH-76/99

## 1 INTRODUCTION

Considerations of the effects of external magnetic fields in the Early Universe and in problems of high $`T_c`$ superconductivity (, ) are the main motivations to study the behaviour of fermionic matter under the influence of an external magnetic field. The three-dimensional continuum Lagrangian of the model is given by: $$\mathcal{L}=-\frac{1}{4}(F_{\mu \nu })^2+\overline{\mathrm{\Psi }}D_\mu \gamma _\mu \mathrm{\Psi }-m\overline{\mathrm{\Psi }}\mathrm{\Psi },$$ (1) where $`D_\mu =\partial _\mu -iga_\mu ^S-ieA_\mu ;`$ $`a_\mu ^S`$ is a fluctuating gauge field, while $`A_\mu `$ represents the external gauge field. The main object of interest here is the condensate $`<\overline{\mathrm{\Psi }}\mathrm{\Psi }>,`$ which is the coincidence limit of the fermion propagator, $`S_F(x,y).`$

## 2 RESULTS IN THE CONTINUUM

A first estimate for the enhancement of the condensate arising from the external fields may be gained through the analysis of the relevant Schwinger-Dyson equation: $$S_F^{-1}(p)=\gamma \cdot p-g\int \frac{d^3k}{(2\pi )^3}\gamma ^\mu S_F(k)\mathrm{\Gamma }^\nu (k,p-k)D_{\mu \nu }(p-k)$$ (2) where $`\mathrm{\Gamma }^\nu `$ is the fermion-photon vertex function and $`D_{\mu \nu }`$ is the exact photon propagator (). The results of a recent approximate solution of the above equation in the regime of a small homogeneous external magnetic field, for both quenched and dynamical fermions, are depicted in figure 1, where one may see the dynamically generated mass versus the magnetic field strength. The upper curve (labeled “Q,Int-solve”) is the solution for quenched fermions, while the lower curve is the weak field approximation to the dynamical fermionic condensate. We have also included the weak field approximation to the quenched result, as a measure of the reliability of the weak field expansion. There have also been approximations in the regime of strong magnetic fields , but for a fully quantitative treatment one should rely on the lattice approach ().

## 3 LATTICE RESULTS

We will first present the results for the $`T=0`$ case and a homogeneous magnetic field. Figure 2 contains the condensate versus the gauge field coupling constant, $`\beta _g,`$ for several values of the magnetic field. This figure is the final outcome of a series of measurements performed at several values of the bare mass; the result shown here is the extrapolation to the zero mass limit. The result is independent of the magnetic field at strong gauge coupling, because the gauge interactions are the main contributor to the condensate in this regime; the magnetic field takes over at weak coupling, on the right-hand part of the diagram. Figure 3 contains the condensate versus $`\beta _g`$ for a typical fixed value of the magnetic field. The uppermost data correspond to a symmetric lattice $`16^3.`$ The curve is smooth and no sign of discontinuity can be seen anywhere. The next result comes from an asymmetric lattice, with a rather large time extent, though: $`24^2\times 6.`$ A structure starts showing up at $`\beta _g\simeq 0.45.`$ To see this structure better, we go to the $`16^2\times 4`$ lattice; the structure moves to $`\beta _g\simeq 0.40`$ and becomes somewhat steeper. The effect of the spatial size of the lattice is not large, as one may see by comparing the points for $`16^2\times 4`$ with the ones for $`24^2\times 4,`$ which are also shown on the figure. Finally, the points for $`16^2\times 2`$ show a clear discontinuity in the slope.
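Where one wants to quantify such a change of slope rather than judge it by eye, a continuous piecewise-linear (“hinge”) fit with a scanned break point is a simple option. The sketch below is our illustration, not part of the original analysis; the data arrays and the use of numpy are assumptions.

```python
# Locate a kink in the condensate-vs-beta_g curve by fitting a continuous
# piecewise-linear model  cond ~ c0 + c1*beta + c2*max(beta - b0, 0).
import numpy as np

def locate_kink(beta, cond):
    best_chi2, best_b0 = np.inf, None
    for b0 in beta[2:-2]:              # keep a few points on each side
        X = np.column_stack([np.ones_like(beta), beta,
                             np.clip(beta - b0, 0.0, None)])
        coef, *_ = np.linalg.lstsq(X, cond, rcond=None)
        chi2 = np.sum((X @ coef - cond) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_b0 = chi2, b0
    return best_b0
```

Applied to the $`16^2\times 4`$ data, such a fit would be expected to place the break near $`\beta _g\simeq 0.40`$.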
Although it deserves more detailed study, it seems safe to interpret this discontinuity as a symmetry-restoring phase transition induced by the nonzero temperature. In figure 4 we use the results for various lattice sizes to construct a graph showing the dependence of the condensate on the temperature for two values of the external magnetic field. It is surprisingly similar to the corresponding result for the “free” case (). The condensate tends to zero for large temperatures; this tendency is more pronounced for the smaller magnetic field. In the last figure we show the results of a study of a non-homogeneous magnetic field. We consider a lattice whose central $`6\times 6`$ region (for all values of z) carries a constant magnetic flux, while in the rest of the lattice the magnetic field vanishes (); we measure the condensate along a straight line passing through the center of the lattice at a fixed value of the magnetic field. This profile is shown in figure 5, and one may see that the condensate is non-zero only in the region of non-vanishing magnetic flux. Away from this region the remaining condensate may be accounted for by the explicit mass term and has little to do with the external magnetic field.

## Acknowledgements

K.F. and G.K. would like to acknowledge financial support from the TMR project “Finite temperature phase transitions in particle Physics”, EU contract number: FMRX-CT97-0122. The work of N.E.M. is partially supported by PPARC (UK) through an Advanced Fellowship. The work of A.M. is supported by PPARC.
# Brane-World Black Holes ## 1 Introduction There has been much recent interest in the idea that our universe may be a brane embedded in some higher dimensional space. It has been shown that the hierarchy problem can be solved if the higher dimensional Planck scale is low and the extra dimensions large . An alternative solution, proposed by Randall and Sundrum (RS), assumes that our universe is a negative tension domain wall separated from a positive tension wall by a slab of anti-de Sitter (AdS) space . This does not require a large extra dimension: the hierarchy problem is solved by the special properties of AdS. The drawback with this model is the necessity of a negative tension object. In further work , RS suggested that it is possible to have an infinite extra dimension. In this model, we live on a positive tension domain wall inside anti-de Sitter space. There is a bound state of the graviton confined to the wall as well as a continuum of Kaluza-Klein (KK) states. For non-relativistic processes on the wall, the bound state dominates over the KK states to give an inverse square law if the AdS radius is sufficiently small. It appears therefore that four dimensional gravity is recovered on the domain wall. This conclusion was based on perturbative calculations for zero thickness walls. Supergravity domain walls of finite thickness have recently been considered and a non-perturbative proof that the bound state exists for such walls was given in . It is important to examine other non-perturbative gravitational effects in this scenario to see whether the predictions of four dimensional general relativity are recovered on the domain wall. If matter trapped on a brane undergoes gravitational collapse then a black hole will form. Such a black hole will have a horizon that extends into the dimensions transverse to the brane: it will be a higher dimensional object. Phenomenological properties of such black holes have been discussed in for models with large extra dimensions. In this paper we discuss black holes in the RS models. A natural candidate for such a hole is the Schwarzschild-AdS solution, describing a black hole localized in the fifth dimension. We show in the Appendix that it is not possible to intersect such a hole with a vacuum domain wall so it is unlikely that it could be the final state of gravitational collapse on the brane. A second possibility is that what looks like a black hole on the brane is actually a black string in the higher dimensional space. We give a simple solution describing such a string. The induced metric on the domain wall is simply Schwarzschild, as it has to be if four dimensional general relativity (and therefore Birkhoff’s theorem) are recovered on the wall. This means that the usual astrophysical properties of black holes (e.g. perihelion precession, light bending etc.) are recovered in this scenario. We find that the AdS horizon is singular for this black string solution. This is signalled by scalar curvature invariants diverging if one approaches the horizon along the axis of the string. If one approaches the horizon in a different direction then no scalar curvature invariant diverges. However, in a frame parallelly propagated along a timelike geodesic, some curvature components do diverge. Furthermore, the black string is unstable near the AdS horizon - this is the Gregory-Laflamme instability . However, the solution is stable far from the AdS horizon. 
We will argue that our solution evolves to a “black cigar” solution describing an object that looks like the black string far from the AdS horizon (so the metric on the domain wall is Schwarzschild) but has a horizon that closes off before reaching the AdS horizon. In fact, we conjecture that this black cigar solution is the unique stable vacuum solution in five dimensions which describes the endpoint of gravitational collapse on the brane. We suspect that the AdS horizon will be non-singular for the cigar solution.

## 2 The Randall-Sundrum models

Both models considered by RS use five dimensional AdS. In horospherical coordinates the metric is $$ds^2=e^{-2y/l}\eta _{ij}dx^idx^j+dy^2$$ (2.1) where $`\eta _{ij}`$ is the four dimensional Minkowski metric and $`l`$ the AdS radius. The global structure of AdS is shown in figure 1. Horospherical coordinates break down at the horizon $`y=\infty `$. In their first model , RS slice AdS along the horospheres at $`y=0`$ and $`y=y_c>0`$, retain the portion $`0<y<y_c`$ and assume $`Z_2`$ reflection symmetry at each boundary plane. This gives a jump in extrinsic curvature at these planes, yielding two domain walls of equal and opposite tension $$\sigma =\pm \frac{6}{\kappa ^2l}$$ (2.2) where $`\kappa ^2=8\pi G`$ and $`G`$ is the five dimensional Newton constant. The wall at $`y=0`$ has positive tension and the wall at $`y=y_c`$ has negative tension. Mass scales on the negative tension wall are exponentially suppressed relative to those on the positive tension one. This provides a solution of the hierarchy problem provided we live on the negative tension wall. The global structure is shown in figure 1. The second RS model is obtained from the first by taking $`y_c\to \infty `$. This makes the negative tension wall approach the AdS horizon, which includes a point at infinity. RS say that their model contains only one wall so presumably the idea is that the negative tension brane is viewed as an auxiliary device to set up boundary conditions. However, if the geometry makes sense then it should be possible to discuss it without reference to this limiting procedure involving negative tension objects. If one simply slices AdS along a positive tension wall at $`y=0`$ and assumes reflection symmetry then there are several ways to analytically continue the solution across the horizon. These have been discussed in . There are two obvious choices of continuation. The first is simply to assume that beyond the horizon, the solution is pure AdS with no domain walls present. This is shown in figure 2. An alternative, which seems closer in spirit to the geometry envisaged by RS, is to include further domain walls beyond the horizon, as shown in figure 2. In this case, there are infinitely many domain walls present.

## 3 Black string in AdS

Let us first rewrite the AdS metric 2.1 by introducing the coordinate $`z=le^{y/l}`$. The metric is then manifestly conformally flat: $$ds^2=\frac{l^2}{z^2}(dz^2+\eta _{ij}dx^idx^j).$$ (3.1) In these coordinates, the horizon lies at $`z=\infty `$ while the timelike infinity of AdS is at $`z=0`$. We now note that if the Minkowski metric within the brackets is replaced by any Ricci flat metric then the Einstein equations (with negative cosmological constant) are still satisfied. (This procedure was recently discussed for general p-brane solutions in .)
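The claim is easy to check symbolically. Below is a minimal SymPy sketch (our addition, not part of the original derivation) that verifies the case used in the next paragraph, a four-dimensional Schwarzschild seed inside the conformal factor: the resulting five-dimensional metric obeys $`R_{\mu \nu }=-(4/l^2)g_{\mu \nu }`$, i.e. the vacuum Einstein equations with negative cosmological constant. The coordinate names and the reliance on SymPy's simplifier to close the algebra are assumptions.

```python
# SymPy check that ds^2 = (l/z)^2 [ -U dt^2 + dr^2/U + r^2 dOmega^2 + dz^2 ],
# with U = 1 - 2M/r, satisfies R_mn = -(4/l^2) g_mn in five dimensions.
import sympy as sp

t, r, th, ph, z = sp.symbols('t r theta phi z')
l, M = sp.symbols('l M', positive=True)
x = [t, r, th, ph, z]
U = 1 - 2*M/r
w = l**2/z**2                       # conformal factor (l/z)^2
g = sp.diag(-w*U, w/U, w*r**2, w*r**2*sp.sin(th)**2, w)
ginv = g.inv()
N = 5

# Christoffel symbols Gamma^a_{bc} of the metric g
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
                                     + sp.diff(g[d, c], x[b])
                                     - sp.diff(g[b, c], x[d]))/2
                         for d in range(N)))
         for c in range(N)] for b in range(N)] for a in range(N)]

def ricci(b, c):
    # R_bc = d_a Gamma^a_bc - d_c Gamma^a_ab
    #        + Gamma^a_ad Gamma^d_bc - Gamma^a_db Gamma^d_ac
    return sp.simplify(sum(
        sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][a][b], x[c])
        + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][d][b]*Gam[d][a][c]
              for d in range(N))
        for a in range(N)))

assert all(sp.simplify(ricci(b, c) + 4*g[b, c]/l**2) == 0
           for b in range(N) for c in range(b, N))
print("R_mn = -(4/l^2) g_mn: verified")
```

Replacing the Schwarzschild function $`U`$ by $`1`$ reduces this to a check of the pure AdS metric 3.1.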
A natural choice for a metric describing a black hole on a domain wall at fixed $`z`$ is to take this Ricci flat metric to be the Schwarzschild solution: $$ds^2=\frac{l^2}{z^2}(-U(r)dt^2+U(r)^{-1}dr^2+r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)+dz^2)$$ (3.2) where $`U(r)=1-2M/r`$. This metric describes a black string in AdS. Including a reflection symmetric domain wall in this spacetime is trivial: surfaces of constant $`z`$ satisfy the Israel equations provided the domain wall tension satisfies equation 2.2. For a domain wall at $`z=z_0`$, introduce the coordinate $`w=z-z_0`$. The metric on both sides of the wall can then be written $$ds^2=\frac{l^2}{(|w|+z_0)^2}(-U(r)dt^2+U(r)^{-1}dr^2+r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)+dw^2)$$ (3.3) with $`-\infty <w<\infty `$ and the wall is at $`w=0`$. It would be straightforward to use the same method to construct a black string solution in the presence of a thick domain wall. The induced metric on a domain wall placed at $`z=z_0`$ can be brought to the standard Schwarzschild form by rescaling the coordinates $`t`$ and $`r`$. The ADM mass as measured by an inhabitant of the wall would be $`M_{}=Ml/z_0`$. The proper radius of the horizon in five dimensions is $`2M_{}`$. The AdS radius $`l`$ is required to be within a few orders of magnitude of the Planck length so black holes of astrophysical mass must have $`M/z_0\gg 1`$. If one included a second domain wall with negative tension then the ADM mass on that wall would be exponentially suppressed relative to that on the positive tension wall. Our solution has an Einstein metric so the Ricci scalar and square of the Ricci tensor are finite everywhere. However the square of the Riemann tensor is $$R_{\mu \nu \rho \sigma }R^{\mu \nu \rho \sigma }=\frac{1}{l^4}\left(40+\frac{48M^2z^4}{r^6}\right),$$ (3.4) which diverges at the AdS horizon $`z=\infty `$ as well as at the black string singularity at $`r=0`$. We shall have more to say about this later. It is important to examine the behaviour of geodesics in this spacetime. Let $`u`$ denote the velocity along a timelike or null geodesic with respect to an affine parameter $`\lambda `$ (taken to be the proper time in the case of a timelike geodesic). The Killing vectors $`k=\partial /\partial t`$ and $`m=\partial /\partial \varphi `$ give rise to the conserved quantities $`E=-k\cdot u`$ and $`L=m\cdot u`$. Rearranging these gives $$\frac{dt}{d\lambda }=\frac{Ez^2}{U(r)l^2}$$ (3.5) and $$\frac{d\varphi }{d\lambda }=\frac{Lz^2}{r^2l^2},$$ (3.6) for motion in the equatorial plane ($`\theta =\pi /2`$). The equation describing motion in the $`z`$-direction is simply $$\frac{d}{d\lambda }\left(\frac{1}{z^2}\frac{dz}{d\lambda }\right)=\frac{\sigma }{zl^2},$$ (3.7) where $`\sigma =0`$ for null geodesics and $`\sigma =1`$ for timelike geodesics. The solutions for null geodesics are $`z=\mathrm{constant}`$ or $$z=\frac{z_1l}{\lambda }.$$ (3.8) The solution for timelike geodesics is $$z=z_1\mathrm{cosec}(\lambda /l).$$ (3.9) In both cases, $`z_1`$ is a constant and we have shifted $`\lambda `$ so that $`z\to \infty `$ as $`\lambda \to 0`$. The (null) solution $`z=\mathrm{const}`$ is simply a null geodesic of the four dimensional Schwarzschild solution. We are more interested in the other solutions because they appear to reach the singularity at $`z=\infty `$.
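Before analysing the radial motion, one can quickly confirm the profiles 3.8 and 3.9 against the equation of motion 3.7. The following fragment is our sketch, under the same SymPy assumptions as above:

```python
# Residual of d/dlam[(1/z^2) dz/dlam] - sigma/(z l^2) for the two solutions.
import sympy as sp

lam = sp.symbols('lambda', positive=True)
l, z1 = sp.symbols('l z_1', positive=True)

def residual(z, sigma):
    return sp.simplify(sp.diff(sp.diff(z, lam)/z**2, lam) - sigma/(z*l**2))

print(residual(z1*l/lam, 0))           # null geodesic (3.8)     -> 0
print(residual(z1/sp.sin(lam/l), 1))   # timelike geodesic (3.9) -> 0
```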
The radial motion is given by $$\left(\frac{dr}{d\lambda }\right)^2+\frac{z^4}{l^4}\left[\left(\frac{l^2}{z_1^2}+\frac{L^2}{r^2}\right)U(r)-E^2\right]=0.$$ (3.10) Now introduce a new parameter $`\nu =z_1^2/\lambda `$ for null geodesics and $`\nu =(z_1^2/l)\mathrm{cot}(\lambda /l)`$ for timelike geodesics. We also define new coordinates $`\stackrel{~}{r}=z_1r/l`$, $`\stackrel{~}{t}=z_1t/l`$, and new constants $`\stackrel{~}{E}=z_1E/l`$, $`\stackrel{~}{L}=z_1^2L/l^2`$ and $`\stackrel{~}{M}=z_1M/l`$. The radial equation becomes $$\left(\frac{d\stackrel{~}{r}}{d\nu }\right)^2+\left(1+\frac{\stackrel{~}{L}^2}{\stackrel{~}{r}^2}\right)\left(1-\frac{2\stackrel{~}{M}}{\stackrel{~}{r}}\right)=\stackrel{~}{E}^2,$$ (3.11) which is the radial equation for a timelike geodesic in a four dimensional Schwarzschild solution of mass $`\stackrel{~}{M}`$ . (This is the ADM mass for an observer with $`z=z_0=l^2/z_1`$.) It should not be surprising that a null geodesic in five dimensions is equivalent to a timelike geodesic in four dimensions: the non-trivial motion in the fifth dimension gives rise to a mass in four dimensions. What is perhaps surprising is the relationship between the four and five dimensional affine parameters $`\nu `$ and $`\lambda `$. We are interested in the behaviour near the singularity, i.e. as $`\lambda \to 0`$. This is equivalent to $`\nu \to \infty `$, i.e. we need to study the late time behaviour of four dimensional timelike geodesics. If such geodesics hit the Schwarzschild singularity at $`\stackrel{~}{r}=0`$ then they do so at finite $`\nu `$. For infinite $`\nu `$ there are two possibilities . The first is that the geodesic reaches $`\stackrel{~}{r}=\infty `$. The second can occur only if $`\stackrel{~}{L}^2>12\stackrel{~}{M}^2`$, when it is possible to have bound states (i.e. orbits restricted to a finite range of $`\stackrel{~}{r}`$) outside the Schwarzschild horizon. The orbits that reach $`\stackrel{~}{r}=\infty `$ have late time behaviour $`\stackrel{~}{r}\simeq \nu \sqrt{\stackrel{~}{E}^2-1}`$ and hence $$r\simeq \frac{z_1l}{\lambda }\sqrt{\stackrel{~}{E}^2-1}$$ (3.12) as $`\lambda \to 0`$. Along such geodesics, the squared Riemann tensor does not diverge. The bound state geodesics behave differently. These remain at finite $`r`$ and therefore the square of the Riemann tensor does diverge as $`\lambda \to 0`$. They orbit the black string infinitely many times before hitting the singularity, but do so in finite affine parameter. It appears that some geodesics encounter a curvature singularity at the AdS horizon whereas others might not because scalar curvature invariants do not diverge along them. It is possible that only part of the surface $`z=\infty `$ is singular. To decide whether or not this is true, we turn to a calculation of the Riemann tensor in an orthonormal frame parallelly propagated along a timelike geodesic that reaches $`z=\infty `$ but for which the squared Riemann tensor does not diverge (i.e. a non-bound state geodesic). The tangent vector to such a geodesic (with $`L=0`$) can be written $$u^\mu =(\frac{z}{l}\sqrt{\frac{z^2}{z_1^2}-1},\frac{Ez^2}{U(r)l^2},\frac{z^2}{l^2}\sqrt{E^2-\frac{l^2}{z_1^2}U(r)},0,0),$$ (3.13) where we have written the components in the order $`(z,t,r,\theta ,\varphi )`$. A unit normal to the geodesic is $$n^\mu =(0,\frac{zz_1}{l^2U(r)}\sqrt{E^2-\frac{l^2}{z_1^2}U(r)},\frac{Ez_1z}{l^2},0,0).$$ (3.14) It is straightforward to check that this is parallelly propagated along the geodesic i.e.
$`u^\nu \nabla _\nu n^\mu =0`$. These two unit vectors can be supplemented by three other parallelly propagated vectors to form an orthonormal set. However the divergence can be exhibited using just these two vectors. One of the curvature components in this parallelly propagated frame is $$R_{(u)(n)(u)(n)}\equiv R_{\mu \nu \rho \sigma }u^\mu n^\nu u^\rho n^\sigma =\frac{1}{l^2}\left(1-\frac{2Mz^4}{z_1^2r^3}\right),$$ (3.15) which diverges along the geodesic as $`\lambda \to 0`$. The black string solution therefore has a curvature singularity at the AdS horizon. It is well known that black string solutions in asymptotically flat space are unstable to long wavelength perturbations . A black hole is entropically preferred to a sufficiently long segment of string. The string’s horizon therefore has a tendency to “pinch off” and form a line of black holes. One might think that a similar instability occurs for our solution. However, AdS acts like a confining box which prevents fluctuations with wavelengths much greater than $`l`$ from developing. If an instability occurs then it must do so at smaller wavelengths. If the radius of curvature of the string’s horizon is sufficiently small then the AdS curvature will be negligible there and the string will behave as if it were in asymptotically flat space. This means that it will be unstable to perturbations with wavelengths of the order of the horizon radius $`2M_{}=2Ml/z`$. At sufficiently large $`z`$, such perturbations will fit into the AdS box, i.e. $`2M_{}\lesssim l`$, so an instability can occur near the AdS horizon. However for $`M/z\gg 1`$, the potential instability occurs at wavelengths much greater than $`l`$ and is therefore not possible in AdS. Therefore the black string solution is unstable near the AdS horizon but stable far from it. We conclude that, near the AdS horizon, the black string has a tendency to “pinch off” but further away it is stable. After pinching off, the string becomes a stable “black cigar” which would extend to infinity in AdS if the domain wall were not present, but not to the AdS horizon. The cigar’s horizon acts as if it has a tension which balances the force pulling it towards the centre of AdS. We showed above that if our domain wall is at $`z=z_0`$ then a black hole of astrophysical mass has $`M/z_0\gg 1`$, corresponding to the part of the black cigar far from the AdS horizon. Here, the metric will be well approximated by the black string metric so the induced metric on the wall will be Schwarzschild and the predictions of four dimensional general relativity will be recovered.

## 4 Discussion

Any phenomenologically successful theory in which our universe is viewed as a brane must reproduce the large-scale predictions of general relativity on the brane. This implies that gravitational collapse of uncharged non-rotating matter trapped on the brane ultimately settles down to a steady state in which the induced metric on the brane is Schwarzschild. In the higher dimensional theory, such a solution could be a localized black hole or an extended object intersecting the brane. We have investigated these alternatives in the models proposed by Randall and Sundrum (RS). The obvious choice of five dimensional solution in the first case is Schwarzschild-AdS. However we have shown (in the Appendix) that it is not possible to intersect this with a vacuum domain wall so it cannot be the final state of gravitational collapse on the wall. We have presented a solution that describes a black string in AdS.
It is possible to intersect this solution with a vacuum domain wall and the induced metric is Schwarzschild. The solution can therefore be interpreted as a black hole on the wall. The AdS horizon is singular. Scalar curvature invariants only diverge if this singularity is approached along the axis of the string. However, curvature components diverge in a frame parallelly propagated along any timelike geodesic that reaches the horizon. This singularity can be removed if we use the first RS model in which there are two domain walls present and we live on a negative tension wall. However if we wish to use the second RS model (with a non-compact fifth dimension) then the singularity will be visible from our domain wall. In , it was argued that anything emerging from a singularity at the AdS horizon would be heavily red-shifted before reaching us and that this might ensure that physics on the wall remains predictable in spite of the singularity. However we regard singularities as a pathology of the theory since, in principle, arbitrarily large fluctuations can emerge from the singularity and the red-shift is finite. Fortunately, it turns out that our solution is unstable near the AdS horizon. We have suggested that it will decay to a stable configuration resembling a cigar that extends out to infinity in AdS but does not reach the AdS horizon. The solution becomes finite in extent when the gravitational effect of the domain wall is included. Our domain wall is situated far from the AdS horizon so the induced metric on the wall will be very nearly Schwarzschild. Since the cigar does not extend as far as the AdS horizon, it does not seem likely that there will be a singularity there. Similar behaviour was recently found in a non-linear treatment of the RS model . It was shown that pp-waves corresponding to Kaluza-Klein modes are singular at the AdS horizon. These pp-waves are not localized to the domain wall. The only pp-waves regular at the horizon are the ones corresponding to gravitons confined to the wall. We suspect that perturbations of the flat horospheres of AdS that do not vanish near the horizon will generically give rise to a singularity there. It seems likely that there are other solutions that give rise to the Schwarzschild solution on the domain wall. For example, the metric outside a star on the wall would be Schwarzschild. If the cigar solution were the only stable solution giving Schwarzschild on the wall then it would have to be possible to intersect it with a non-vacuum domain wall describing such a star. However, it is then not possible to choose the equation of state for the matter on the wall, for reasons analogous to those discussed in the Appendix. Our solution is therefore not capable of describing generic stars. If this is the case then one might wonder whether there are other solutions describing black holes on the wall. We conjecture that the cigar solution is the unique stable vacuum solution with a regular AdS horizon that describes a non-rotating uncharged black hole on the domain wall.

Acknowledgments

We have benefitted from discussions with D. Brecher and G. Gibbons. AC was supported by Pembroke College, Cambridge.
## Appendix

One candidate for a black hole formed by gravitational collapse on a domain wall in AdS is the Schwarzschild-AdS solution, which has metric $$ds^2=-U(r)dt^2+U(r)^{-1}dr^2+r^2(d\chi ^2+\mathrm{sin}^2\chi d\mathrm{\Omega }^2),$$ (1) where $`d\mathrm{\Omega }^2`$ is the line element on a unit 2-sphere and $$U(r)=1-\frac{2M}{r^2}+\frac{r^2}{l^2}.$$ (2) The parameter $`M`$ is related to the mass of the black hole. We have not yet included the gravitational effect of the wall. We shall focus on the second RS model so we want a single positive tension domain wall with the spacetime reflection symmetric in the wall. Denote the spacetime on the two sides of the wall as $`(+)`$ and $`(-)`$. Let $`n`$ be a unit (spacelike) normal to the wall pointing out of the $`(+)`$ region. The tensor $`h_{\mu \nu }=g_{\mu \nu }-n_\mu n_\nu `$ projects vectors onto the wall, and its tangential components give the induced metric on the wall. The extrinsic curvature of the wall is defined by $$K_{\mu \nu }=h_\mu ^\rho h_\nu ^\sigma \nabla _\rho n_\sigma $$ (3) and its trace is $`K=h^{\mu \nu }K_{\mu \nu }`$. The energy momentum tensor $`t_{\mu \nu }`$ of the wall is given by varying its action with respect to the induced metric. The gravitational effect of the domain wall is given by the Israel junction conditions , which relate the discontinuity in the extrinsic curvature at the wall to its energy momentum: $$[K_{\mu \nu }-Kh_{\mu \nu }]_{-}^{+}=-\kappa ^2t_{\mu \nu }$$ (4) (see for a simple derivation of this equation). Here $`\kappa ^2=8\pi G`$ where $`G`$ is the five dimensional Newton constant. This can be rearranged using reflection symmetry to give $$K_{\mu \nu }=-\frac{\kappa ^2}{2}\left(t_{\mu \nu }-\frac{t}{3}h_{\mu \nu }\right),$$ (5) where $`t=h^{\mu \nu }t_{\mu \nu }`$. Cylindrical symmetry dictates that we should consider a domain wall with position given by $`\chi =\chi (r)`$. The unit normal on the $`(+)`$ side can be written $$n=\frac{ϵr}{\sqrt{1+Ur^2\chi ^{\prime 2}}}\left(d\chi -\chi ^{\prime }dr\right),$$ (6) where $`ϵ=\pm 1`$ and a prime denotes a derivative with respect to $`r`$. The timelike tangent to the wall is $$u=U^{-1/2}\frac{\partial }{\partial t},$$ (7) and the spacelike tangents are $$t=\sqrt{\frac{U}{1+Ur^2\chi ^{\prime 2}}}\left(\chi ^{\prime }\frac{\partial }{\partial \chi }+\frac{\partial }{\partial r}\right),$$ (8) $$e_\theta =\frac{1}{r\mathrm{sin}\chi }\frac{\partial }{\partial \theta },$$ (9) $$e_\varphi =\frac{1}{r\mathrm{sin}\chi \mathrm{sin}\theta }\frac{\partial }{\partial \varphi }.$$ (10) The non-vanishing components of the extrinsic curvature in this basis are $$K_{uu}=\frac{ϵU^{\prime }r\chi ^{\prime }}{2\sqrt{1+Ur^2\chi ^{\prime 2}}},$$ (11) $$K_{\theta \theta }=K_{\varphi \varphi }=\frac{ϵ}{\sqrt{1+Ur^2\chi ^{\prime 2}}}\left(\frac{\mathrm{cot}\chi }{r}-U\chi ^{\prime }\right),$$ (12) $$K_{tt}=\frac{ϵ}{\left(1+Ur^2\chi ^{\prime 2}\right)^{3/2}}\left(\chi ^{\prime 3}U^2r^2+2\chi ^{\prime }U+Ur\chi ^{\prime \prime }+U^{\prime }r\chi ^{\prime }/2\right).$$ (13) A vacuum domain wall has $$t_{\mu \nu }=-\sigma h_{\mu \nu },$$ (14) where $`\sigma `$ is the wall’s tension. The Israel conditions are $$K_{\mu \nu }=-\frac{\kappa ^2}{6}\sigma h_{\mu \nu }.$$ (15) These reduce to $$K_{uu}=-K_{tt}=-K_{\theta \theta }=\frac{\kappa ^2}{6}\sigma .$$ (16) It is straightforward to verify that these equations have no solution. A solution can be found for a non-vacuum domain wall with energy-momentum tensor $$t_{\mu \nu }=\mathrm{diag}(\sigma ,p,p,p,0),$$ (17) since then we have three unknown functions ($`\sigma (r),p(r),\chi (r)`$) and three equations.
However this does not allow an equation of state to be specified in advance. We are only interested in vacuum solutions since these describe the final state of gravitational collapse on the brane.
# Thermal fluctuations in macroscopic quantum memory

## I Introduction

Quantum memory is a device capable of reliably storing linear superpositions of quantum states. It will be a part of a quantum computer when (if) the latter is finally built, and may be useful for other applications as well. (For a recent review of quantum computing with an emphasis on fault tolerance see ref. .) To work as quantum memory, a physical system must satisfy a number of requirements. First, it must have at least two fairly stable quantum states. These states form a basis for linear combinations that can be stored in the device. For example, the basis may be formed by perturbative quantum states built near local energy minima, and stability of the basis states may be ensured by a large potential barrier separating them. In such cases we will loosely refer to the basis states as the ground states, or vacua, even though these ground states may not be degenerate in energy and in some cases may contain localized excitations. We note, though, that for some purposes it may be desirable to have ground states that actually are (nearly) degenerate in energy. If two basis states $`|\psi _1\rangle `$ and $`|\psi _2\rangle `$ forming a linear combination $$|\psi \rangle =c_1|\psi _1\rangle +c_2|\psi _2\rangle $$ (1) are degenerate, the ratio $`c_2/c_1`$ will be preserved by the evolution. When the basis states are not degenerate, the relative magnitude of $`c_1`$ and $`c_2`$ will be preserved, but not the relative phase. The relative magnitude, however, can be arbitrary. In comparison, a classical two-state system will only store two values, referred to as 0 and 1. For long-time quantum storage, one will probably need to build in some redundancy, so that the basis states refer to many microscopic (local) degrees of freedom. However, redundancy is helpful in protecting quantum information only when the local degrees of freedom in the basis states are sufficiently entangled, i.e. the basis states cannot be identified by local measurements. This condition rules out, in particular, any system in which a ground state degeneracy is due merely to spontaneous symmetry breaking by a local order parameter. (To see why, consider an easy-axis magnet, in which magnetization can be in one of two directions. The direction of magnetization can be found by measuring local magnetization in a relatively small region.) The reason why entanglement of local degrees of freedom is necessary for long-time quantum storage is that local measurements will in effect be performed by external noise, and if they can indeed distinguish between the basis states they will destroy the stored quantum information (the Schrödinger-cat scenario). In a real sample, tunneling transitions between the basis states will cause quantum memory to deteriorate. Nevertheless, if the basis states are sufficiently entangled, tunneling between them will have to involve many local degrees of freedom, and the tunneling probability will be strongly suppressed. Known examples include fractional quantum Hall and similar types of rigid ground states on tori. In these cases, a typical tunneling fluctuation consists of creating a vortex-antivortex pair, transporting the vortex and the antivortex around the torus, along topologically distinct paths, and then annihilating the pair.
It has been argued that, at zero temperature, the tunneling probability (and the associated energy splitting between the ground states) is generically of order $`\mathrm{exp}(-L/l)`$, where $`L`$ is the size of the system, and $`l`$ is some correlation length. So, the zero-temperature tunneling should not be a problem in practice, as long as one can keep the size of the system sufficiently large. Of more concern are thermal fluctuations. At finite temperature, there will be a sea of vortex-antivortex pairs, with density proportional to $`\mathrm{exp}(-F_0/T)`$, where $`F_0`$ is the free energy of a single vortex. (This assumes that the temperature is still low enough, so that no phase transition occurs.) One expects that motion of these “preexisting” vortices can effect transitions between ensembles built near different ground states. The question is, then, what is the rate of such transitions, at a given temperature $`T`$. A single transition is sufficient to destroy the stored quantum information. So, the rate of the transitions will also be the rate at which the quantum information “degrades”, and the corresponding time will be an estimate for the maximal duration of reliable storage. In this paper, we will compute in one case and estimate in another the rates of finite-temperature transitions between ensembles corresponding to different ground states for some of the simplest systems exhibiting multiple ground states and macroscopic entanglement. One system is a type-II superconducting film grown on the surface of a torus. In Sect. 2 we review the origin of multiple ground states in a type-II superconductor on a torus. The presence of multiple ground states in this case can be seen either via manipulations with vortices and single electrons, which produce a nontrivial phase when transported around each other, or via a semiclassical argument. Transitions between different classical vacua are topological transitions, which change a winding number of the gauge and Higgs fields. In Sect. 3 we construct a correlator that measures the rate of topological transitions at finite temperature. This correlator is analogous to the one proposed in ref. to measure the rate of topological transitions in the electroweak theory. In Sect. 4 we compute the rate. The main ingredient of the computation is that vortices are well separated, and their motion is diffusive, i.e. associated with a large viscosity. Our main result, for a film of fixed thickness, is that although the rate of topological transitions is indeed proportional to the vortex density, and so is not suppressed exponentially with $`L`$ (the size of the system), there is a power-law suppression. This suppression can be described by saying that there is no volume enhancement of the rate, i.e. as long as the two dimensions of the torus stay comparable, the rate will not grow with the total volume (while the total number of vortices of course will). Equivalently, the rate per unit volume will decrease with the total volume. It is easy to redesign the device so that the suppression of the finite-temperature rate becomes exponential with $`L`$. Imagine making the superconducting film thicker, so that vortices resolve into Abrikosov flux lines; the free energy of those grows linearly with their length.
In the limiting case, which is the second system we consider, the entire solid torus is superconducting, and a topological transition is mediated by a well defined critical fluctuation: a critical flux line, whose energy is proportional to $`L`$. The Boltzmann factor in the rate is $`\mathrm{exp}(-E_0/T)`$, where now $`E_0\propto L`$, so the finite-temperature rate is suppressed exponentially with $`L`$. The zero-temperature tunneling rate is suppressed even more strongly, as an exponential of $`L^2`$. So, a solid superconducting torus (or a wire, or a ring, or a hollow cylinder) is a good candidate for stable quantum memory. In the concluding section we discuss a possible way of writing quantum information to and reading it from this device. We nevertheless retain interest in the two-dimensional case (the film), because a universal quantum computation is theoretically possible with non-Abelian anyons , and systems in which those have been argued to occur are two-dimensional. In the concluding section we also discuss whether our results teach us anything about these more complex cases.

## II Ground states of a toroidal superconductor

Existence of multiple ground states in a type-II superconductor on a torus can be deduced from the presence of two types of local excitations, vortices and single electrons, with their corresponding values of flux and charge. It can also be obtained from an explicit semiclassical construction of the ground states. In this section, we use the Ginzburg-Landau (GL) theory for description of the ground states. We interpret the GL expression for energy as an effective Hamiltonian for slow degrees of freedom (rather than as a thermodynamic potential, like free energy). So, we treat the GL fields as quantum fields. The GL Hamiltonian of a superconductor is $$H=\int d^3x\left(\zeta |(\nabla +ig\text{A})\psi |^2-a|\psi |^2+b|\psi |^4\right)+H_{\mathrm{EM}},$$ (2) where $`\zeta `$, $`a`$, and $`b`$ are positive coefficients, $$g=2e/c,$$ (3) $`2e`$ is minus the electric charge of a Cooper pair ($`e>0`$), and $`c`$ is the speed of light; $`\hbar =1`$ everywhere. We concentrate on the extreme type-II case; the corresponding condition on the parameters is $$g^2\zeta ^2\ll b.$$ (4) The Hamiltonian of the electromagnetic field is taken, for simplicity, in the relativistic form: $$H_{\mathrm{EM}}=\frac{1}{8\pi }\int d^3x\left(\text{E}^2+\text{B}^2\right).$$ (5) In (2), (5) $`\psi `$ is the complex “order parameter” field (it is not really an order parameter because it is not gauge-invariant ), A is the electromagnetic vector potential, E and B are the electric and magnetic fields. We first consider a superconducting film that extends from $`z=0`$ to $`z=d`$ in the $`z`$ direction and is periodic (toric) in the $`x`$ and $`y`$ directions. These periodic boundary conditions define what may be called a “mathematical” torus, as distinguished from the surface of a physical torus, a “doughnut”, that one may produce in a laboratory. Later, we will discuss the distinction in more detail and will also consider the case when the entire solid torus is superconducting. The vortex of the theory (2) is a short (of length $`d`$) Abrikosov flux line whose axis is parallel to the $`z`$ axis. A vortex carries magnetic flux of $`2\pi /g=\pi c/e`$. So, if we break a Cooper pair and transport one of the electrons around the vortex, the wave function of the system will acquire a nontrivial Aharonov-Bohm factor of $`-1`$. As shown in refs.
, , whenever transport of local excitations around each other produces such a nontrivial factor, the ground state of the system on a torus is degenerate, up to an energy splitting decreasing exponentially with the system’s linear size. For the present case, it comes out that the ground state degeneracy on a torus is at least four-fold. We do not reproduce the argument here, as it can be found in the above papers. Besides, in our case the vacuum structure admits a semiclassical interpretation, which allows us to obtain all the requisite results in a different way. Consider classical vacua of (2), i.e. configurations of the lowest energy. On a torus, these are: $$\text{A}=\frac{2\pi }{g}\left(\frac{n_x𝒆_x}{L_x}+\frac{n_y𝒆_y}{L_y}\right),$$ (6) $$\psi =\psi _0\mathrm{exp}\left(-2\pi in_xx/L_x-2\pi in_yy/L_y\right),$$ (7) where $`n_x`$ and $`n_y`$ are arbitrary integers, $`𝒆_x`$ and $`𝒆_y`$ are unit vectors in the two directions, and $`L_x`$, $`L_y`$ are the corresponding dimensions of the torus; $$\psi _0=(a/2b)^{1/2}.$$ (8) The integers $`n_x`$ and $`n_y`$ are the winding numbers of the configuration: they count how many times the phase of $`\psi `$ winds as one travels along the torus’s noncontractible loops. We consider the case when $`L_x`$ and $`L_y`$ are comparable and assume, for definiteness, that $$L_x>L_y,$$ (9) i.e. that the larger loop of the torus is in the $`x`$ direction. Tunneling processes mix the perturbative vacua built near the configurations (6)–(7) into linear combinations, $`\theta `$-vacua, analogous to those of four-dimensional QCD . If we denote the perturbative vacua as $`|n_x,n_y\rangle `$, the $`\theta `$-vacua are $$|\theta _x,\theta _y\rangle =\underset{n_x,n_y}{\sum }\mathrm{exp}(i\theta _xn_x+i\theta _yn_y)\,|n_x,n_y\rangle ,$$ (10) where $`\theta _x`$ and $`\theta _y`$ run from 0 to $`2\pi `$. In this case we need two $`\theta `$ angles because there are two winding numbers, $`n_x`$ and $`n_y`$. A more important difference from QCD, though, is that in the present case the tunneling amplitudes, and hence the energy splittings among the $`\theta `$ vacua, are exponentially suppressed with $`L_x`$ or $`L_y`$. This exponential suppression was found in ref. in a slightly different context, see also ref. . It can be explained as follows. A typical tunneling fluctuation consists of a vortex and an antivortex, which travel along topologically distinct routes: the vortex travels distance $`\mathrm{\Delta }L`$, and the antivortex distance $`L_y-\mathrm{\Delta }L`$ (if we consider transitions that change $`n_x`$). At least one of these distances is macroscopically large, and to travel that far the object has to move very fast, or to stay in existence for very long, or to achieve a good balance between these two extremes. One finds that even the fluctuation that achieves the optimal balance still has a Euclidean action proportional to $`L_y`$, resulting in an exponentially suppressed amplitude. In what follows we will assume that the system is sufficiently large, so that the tunneling processes that change $`n_x`$ and $`n_y`$ are practically nonexistent. In this case, the linear combinations (10) are no longer special, and an equally good basis in the ground state subspace is provided by the perturbative vacua $`|n_x,n_y\rangle `$ built near the classical solutions (6)–(7). From the nontrivial properties of excitations, with respect to transport around each other, we have learned that, when tunneling is neglected, there are at least four degenerate ground states.
Now we find infinitely many degenerate vacua $`|n_x,n_y\rangle `$. It is easy to make four from infinitely many. Note that the ground states $`|n_x,n_y\rangle `$ and $`|n_x+1,n_y\rangle `$ can be distinguished by breaking a Cooper pair and transporting one of the electrons around the torus in the $`x`$ direction. Say, for $`n_x=0`$ the electron will pick no phase factor, while for $`n_x=1`$ it will pick a factor of $`-1`$. On the other hand, given that the charge of the electron is the minimal charge in the system, there is no way to distinguish between $`|n_x,n_y\rangle `$ and $`|n_x+2,n_y\rangle `$. Similarly, one cannot distinguish between $`|n_x,n_y\rangle `$ and $`|n_x,n_y+2\rangle `$. So, in the absence of tunneling, instead of the infinitely many vacua $`|n_x,n_y\rangle `$ we may as well consider only four “equivalence classes”, corresponding to $`n_x`$ and $`n_y`$ both being even, one being even, the other odd, and both being odd, respectively. The four vacua deduced from the quantum numbers of the excitations are representatives of these four equivalence classes. Although, as we have seen, in the absence of tunneling we do not have to consider the entire infinite “lattice” of the vacua $`|n_x,n_y\rangle `$, sometimes it is convenient to do so. In particular, in the next section we will see that thermal fluctuations in the winding numbers are conveniently viewed as diffusion of $`n_x`$ and $`n_y`$ over an infinite lattice made by pairs of integers. Now consider a type-II film that sits on the surface of a solid torus, a “doughnut”, whose bulk is not superconducting. There are still two winding numbers, $`n_x`$ and $`n_y`$. For example, $`n_x`$ in this case is simply the total magnetic flux through the doughnut’s hole, in units of the flux quantum $`\mathrm{\Phi }_0=2\pi /g`$. One can change $`n_x`$ to $`(n_x+1)`$ by dragging an extra flux quantum from the outside, through the bulk of the doughnut. This is equivalent to creating a vortex and an antivortex on the outer side of the doughnut, transporting them along topologically distinct paths to the inner side, and annihilating them there, cf. ref. . In quantum theory, this process occurs spontaneously, as a quantum fluctuation. It is a tunneling process between two distinct ground states that differ by one unit of $`n_x`$. The conclusion that the tunneling rate is suppressed exponentially with $`L_y`$ (for transitions that change $`n_x`$) still applies. One can switch between the ground states “by hand”, i.e. by dragging appropriate fluxes with the help of external solenoids. Switching from $`|\psi _1\rangle `$ to $`|\psi _2\rangle `$, for a system that was initially in the linear superposition (1), is equivalent to interchanging $`c_1`$ and $`c_2`$. It is hard to say, though, if this “quantum switch” can serve any useful practical purpose. On a “doughnut”, the superconducting current in a ground state with $`n_x\sim 1`$ and $`n_y=0`$ is of order $`c\mathrm{\Phi }_0/\mathcal{L}`$, where $`\mathcal{L}`$ is the self-inductance of the device: $`\mathcal{L}\sim L_x\mathrm{ln}(L_x/L_y)`$; in what follows we assume the logarithm here to be of order one. The current density is then inversely proportional to the surface area of the sample. Because the current density is so low, ground states differing by a few units of $`n_x`$ or $`n_y`$ are practically indistinguishable with regard to energies of local excitations. This circumstance plays an important role in preservation of quantum coherence at finite temperature.
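To put a rough number on “so low”, here is an order-of-magnitude sketch (ours, in SI units; the loop dimensions are assumptions, and a standard thin-loop inductance formula stands in for the logarithmic estimate above):

```python
# Screening current carried by one trapped flux quantum, I ~ Phi_0 / L_self.
import math

mu0 = 4e-7 * math.pi        # vacuum permeability, H/m
Phi0 = 2.068e-15            # flux quantum h/(2e), Wb

R = 1e-2                    # loop radius ~ 1 cm  (assumed)
a = 1e-3                    # wire radius ~ 1 mm  (assumed)
L_self = mu0 * R * (math.log(8 * R / a) - 2)   # thin circular loop

print(f"L_self ~ {L_self:.1e} H, I ~ {Phi0 / L_self:.1e} A")
# -> L_self ~ 3.0e-08 H, I ~ 6.9e-08 A
```

A current of order $`10^{-7}`$ A circulating in a centimetre-scale sample indeed corresponds to a tiny current density.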
On the doughnut, as opposed to the “mathematical” torus (a rectangle with periodic boundary conditions), the ground states corresponding to different values of $`n_x`$ and $`n_y`$ are not exactly degenerate even in classical theory: for different values of $`n_x`$ there are different amounts of energy associated with the magnetic field trapped in the doughnut’s hole. This energy has very little influence on the rate of topological transitions, so calculation of the rate can be carried out on the “mathematical” torus. On the other hand, a real device will be a “doughnut”, and in that case the magnetic energy will lead to discrete (labelled by $`n_x`$) energy levels. A resonator tuned to the energy difference between two such levels may then be able to write linear superpositions of quantum states to this device, or to a solid superconducting torus, which we discuss later. Estimates related to this writing technique are given in the concluding section.

## III Topological transitions at finite temperature

Consider, for definiteness, two ground states that differ by one unit of $`n_x`$ and have the same $`n_y`$. These states may play the role of the basis states $`|\psi _1\rangle `$ and $`|\psi _2\rangle `$ forming the linear combination (1). When the system is at a finite temperature, excited states are also occupied. For low-lying, perturbative excited states, we can distinguish between states built near $`|\psi _1\rangle `$, let us call them $`|\psi _1^{(n)}\rangle `$, and those built near $`|\psi _2\rangle `$, call them $`|\psi _2^{(n)}\rangle `$; we neglect tunneling between these two sectors. It is essential that, for the superconducting tori that we consider here, excitation energies are nearly the same for the two sectors, even when (as on a “doughnut”) there is a sizable difference between the corresponding ground state energies. If $`E_i^{(n)}`$ denotes the energy of $`|\psi _i^{(n)}\rangle `$, $`E_i^{(0)}`$ the energy of $`|\psi _i\rangle `$ ($`i=1,2`$), and the excitation energies are, to a good accuracy, equal, $$E_1^{(n)}-E_1^{(0)}=E_2^{(n)}-E_2^{(0)}\equiv \mathrm{\Delta }E^{(n)},$$ (11) then there is a quasiequilibrium state described by the following density matrix: $$\rho =\underset{n}{\sum }w_n|\psi ^{(n)}\rangle \langle \psi ^{(n)}|,$$ (12) where $$|\psi ^{(n)}\rangle =c_1|\psi _1^{(n)}\rangle +c_2|\psi _2^{(n)}\rangle ,$$ (13) and $`w_n=𝒩\mathrm{exp}(-\mathrm{\Delta }E^{(n)}/T)`$; $`𝒩`$ is a normalization factor, and $`T`$ is temperature. This state is no less quantum-coherent than the ground state (1). The density matrix (12) is not a fully equilibrium one, because it does not include occupation of higher excited states, those that cannot be regarded as being “near” either $`|\psi _1\rangle `$ or $`|\psi _2\rangle `$. Thermal occupation of these states will determine the rate of transitions between the two sectors. Our goal in this section is to set up the formalism for calculating these transition rates. We begin with the case of a type-II film on the surface of a “doughnut”. Any two configurations that differ by one unit of $`n_x`$ or one unit of $`n_y`$ are separated by a potential barrier whose height is, to a good accuracy, twice the energy of a static vortex. In general, a system at finite temperature does not need to tunnel under a barrier; it can go over it as a result of a thermal fluctuation. In many cases, the rate of these thermal transitions can be computed by considering the vicinity of the fluctuation corresponding to the top of the barrier . This fluctuation is called the critical fluctuation.
For a toroidal type-II film, however, calculational schemes based on expanding near a critical fluctuation are completely useless, for the following reason. The top of the barrier in this case corresponds to a vortex and an antivortex separated by distance $`L_y/2`$ (for transitions that change $`n_x`$), see Fig. 1. But at a finite temperature there is a finite density of vortices and antivortices, with a typical distance between them that is much smaller than $`L_y/2`$. In this situation, a pair of widely separated vortex and antivortex cannot have any special significance. Accordingly, we expect that the rate of topological transitions will be determined by motion of vortices already populating the medium. Nucleation and annihilation of vortex-antivortex pairs will merely maintain the equilibrium concentrations of vortices and antivortices. In contrast to the two-dimensional case (film), a critical fluctuation can be readily identified at finite temperature in a solid superconducting torus, or a loop of thick superconducting wire. A solid type-II torus has multiple ground states, although not as many of them as a torus in which superconductivity is confined to the surface. Loops in the $`y`$ direction are contractible through the superconductor, so there is no winding number $`n_y`$ that would correspond to those. But $`n_x`$ still exists and still counts the number of flux quanta trapped inside the loop. Changing $`n_x`$ by dragging a flux through the loop is still operational, but instead of a vortex-antivortex pair this procedure now creates one long Abrikosov flux line through the wire’s bulk. The top of the energy barrier is reached when the flux line is along a cross-sectional diameter of the wire, see Fig. 2. The energy of this critical flux line is $`E_0\propto L_y`$. The rate of change in $`n_x`$ via thermal fluctuations is proportional to the Boltzmann factor $`\mathrm{exp}(-E_0/T)`$ and thus decreases exponentially with $`L_y`$. At zero temperature, when spontaneous topological transitions have to be through tunneling, the suppression is even stronger: a tunneling path is now a worldsheet in the Euclidean spacetime, and the tunneling rate goes as an exponential of $`L_y^2`$. What we need for the case of a film is a definition of the rate of topological transitions that would make no mention of a critical fluctuation. This requirement is in fact familiar from studies of topological transitions in the electroweak theory, where depending on the temperature one may or may not have a critical fluctuation to expand about. A general definition of the rate in that case is obtained by considering topological transitions as diffusion (or random walk) of the winding number . The rate of the transitions is simply the diffusion rate. Here we construct a similar definition for the toroidal superconducting film. As we already mentioned, to calculate the rate for the film it is sufficient to consider the “mathematical” torus, on which the classical vacua are given by (6)–(7) and are exactly degenerate. To describe diffusion of the winding numbers, we need to generalize their definition so that it will apply away from the vacuum configurations. This generalization is not unique, but the result for the rate will be the same as long as the newly defined winding numbers are equal to $`n_x`$ and $`n_y`$ on the classical vacua (6)–(7).
A suitable definition is $$\alpha _x=\frac{g}{2\pi L_yd}\int d^3x\,A_x,$$ (14) $$\alpha _y=\frac{g}{2\pi L_xd}\int d^3x\,A_y.$$ (15) Note that the winding numbers $`\alpha _x`$ and $`\alpha _y`$ are noninteger away from the classical vacua. Diffusion of $`\alpha _x`$, $`\alpha _y`$ is due to diffusive motion of vortices. We assume that the sample is homogeneous enough so that most of the vortices are not pinned. Translational motion of vortices is semiclassical, so we can define the diffusion rates from the classical equilibrium correlator $$\langle [\alpha _x(t)-\alpha _x(0)]^2\rangle =2\mathrm{\Gamma }_xt$$ (16) and a similar one for $`\alpha _y`$. The linear dependence on time on the right-hand side is characteristic of diffusion (in the absence of external forces), and $`\mathrm{\Gamma }_x`$ is the definition of the rate. Eq. (16) applies at times large compared to some microscopic time characterizing interactions of vortices with the heat bath. The precise meaning of the classical averaging in (16) is as follows. For each set of initial conditions (for the full fields A and $`\psi `$), we compute $`\alpha _x(0)`$, then evolve the system until time $`t`$, and compute $`\alpha _x(t)`$. The square of the difference is then averaged over all initial conditions, using the Boltzmann distribution for those. At this point, we should remember however that the system (2) is not isolated but evolves under the influence of a heat bath. The heat bath is comprised of all degrees of freedom not explicitly present in (2), specifically those associated with electrons. So, the requisite evolution equation includes a random (Langevin) force, and we need to average over realizations of that force as well. Before we proceed, it is convenient to recast the definition of the rate into a different form, which is more convenient for actual calculation. The procedure is completely standard. First, the left-hand side of (16) is trivially rewritten as $$\int _0^tdt^{\prime }\int _0^tdt^{\prime \prime }\langle \dot{\alpha }_x(t^{\prime })\dot{\alpha }_x(t^{\prime \prime })\rangle .$$ (17) The correlator of time derivatives in (17) is an equilibrium correlator and thus depends only on the difference $`t^{\prime }-t^{\prime \prime }`$. We assume that the corresponding correlation time is finite (this assumption can be verified in our specific case). Then, at large $`t`$ the integral (17) is well approximated by $$t\int _{-\infty }^{\infty }d\tau \,\langle \dot{\alpha }_x(\tau )\dot{\alpha }_x(0)\rangle ,$$ (18) which allows us to rewrite the definition (16) of the rate $`\mathrm{\Gamma }_x`$ as $$\mathrm{\Gamma }_x=\frac{1}{2}\int _{-\infty }^{\infty }d\tau \,\langle \dot{\alpha }_x(\tau )\dot{\alpha }_x(0)\rangle .$$ (19) As we will now show, the rate can be found explicitly by a simple calculation based on the picture of diffusing vortices.

## IV Calculation of the rate

When a vortex crosses the line $`y=b`$, the line integral $$C_b=\oint _{y=b}A_xdx$$ (20) changes by the amount of the vortex flux, i.e. $$C_b\to C_b\pm 2\pi /g,$$ (21) the sign depending on which direction the vortex is headed. If a vortex moves the entire length $`L_y`$ (in the $`y`$ direction), it crosses all such lines and changes $`\alpha _x`$, which is essentially the average of $`gC_b/2\pi `$ over $`b`$, by $`\pm 1`$.
## IV Calculation of the rate

When a vortex crosses the line $`y=b`$, the line integral

$$C_b=\oint _{y=b}A_x\,dx$$ (20)

changes by the amount of the vortex flux, i.e.

$$C_b\to C_b\pm 2\pi /g,$$ (21)

the sign depending on which direction the vortex is headed. If a vortex moves the entire length $`L_y`$ (in the $`y`$ direction), it crosses all such lines and changes $`\alpha _x`$, which is essentially the average of $`gC_b/2\pi `$ over $`b`$, by $`\pm 1`$. So, if a vortex moves a distance $`\mathrm{\Delta }y`$, it changes $`\alpha _x`$ by the amount

$$\mathrm{\Delta }\alpha _x=\frac{\mathrm{\Delta }y}{L_y}.$$ (22)

Taking into account all the vortices (of which there are $`N_v`$) and antivortices (of which there are $`N_a`$), we then obtain the time derivative of $`\alpha _x`$ as follows

$$\dot{\alpha }_x=\frac{1}{L_y}\left(\underset{v=1}{\overset{N_v}{\sum }}\dot{y}_v-\underset{a=1}{\overset{N_a}{\sum }}\dot{y}_a\right).$$ (23)

We now substitute this expression into the formula (19) for the rate and assume that, because the vortices are well separated, the velocities of different vortices are uncorrelated. We obtain

$$\mathrm{\Gamma }_x=\frac{N_v+N_a}{2L_y^2}\int _{-\infty }^{\infty }d\tau \,\langle \dot{y}(\tau )\dot{y}(0)\rangle .$$ (24)

The correlator of velocities in (24) is computed using the equation of motion for a single vortex. We use a simple Langevin equation of the form

$$M\ddot{\text{r}}+\eta \dot{\text{r}}=𝐟(t),$$ (25)

where $`M`$ is the mass of a vortex, $`\eta `$ is the viscosity coefficient, and $`\text{f}(t)`$ is a random force, which we assume to be Gaussian white noise; r is the position vector of the vortex, $`\text{r}=(x,y)`$. The condition of applicability of (25) is that the response of the electronic subsystem to changes in $`\psi `$ and A is local; otherwise, there would be a nonlocal response kernel instead of the single coefficient $`\eta `$. The response is local when the mean-free path $`l_{\mathrm{tr}}`$ of the electrons is much smaller than the characteristic length scale from which $`\eta `$ receives the main contribution. As we will see in the Appendix, the latter length scale is the coherence length of the superconductor $`\xi `$, so the condition of applicability of (25) is

$$l_{\mathrm{tr}}\ll \xi ,$$ (26)

i.e. the superconductor should be sufficiently “dirty”. Calculation of $`\eta `$ has had a long history and has eventually been achieved on the basis of microscopic theory . It is more or less straightforward, though, to obtain an estimate, so we present it here. (We assume that the condition (26) is satisfied.) A moving vortex will constantly transfer parts of its kinetic energy to the electrons, which they will dissipate in collisions with lattice impurities. There are two mechanisms of dissipation . One is Joule heat, which dissipates an amount $`\sigma E^2`$ of energy per unit time per unit volume; here $`\sigma `$ is the normal conductivity of the metal, and $`E`$ is the electric field created by the vortex motion. The other mechanism is related to the response of the electrons to changes in the magnitude of $`\psi `$; it dissipates an amount of order $`a(\partial _t\psi _0)^2\tau _{\mathrm{tr}}`$, where $`\tau _{\mathrm{tr}}`$ is the electronic mean-free time, and $`a`$ is the parameter from (2). These two amounts are typically of the same order of magnitude, except at temperatures close to critical, where the second amount is small. We estimate $`E^2`$ created by a moving vortex in the Appendix. This allows us to estimate $`\eta `$ from

$$\eta v^2\sim \sigma \int d^3x\,E^2,$$ (27)

where $`v`$ is the vortex speed. The vortex mass $`M`$ can be estimated from

$$\frac{1}{2}Mv^2\sim \frac{1}{8\pi }\int d^3x\,E^2.$$ (28)

In the Appendix, we find that the integrals in (27)–(28) are saturated at distances $`r\sim \xi `$ from the vortex center. Curiously, in our final formula for the transition rate, $`\eta `$ and $`M`$ will appear only via the ratio

$$\gamma =\eta /M\propto \sigma .$$ (29)

Note that this ratio grows with $`\sigma `$, i.e.
it is larger in a purer metal (which is still “dirty”, though, in the sense of (26)). Physically, this is because electrons in a purer metal more readily accept energy from a moving vortex. From (25), it follows that

$$\langle \dot{y}(\tau )\dot{y}(0)\rangle =\langle \dot{y}^2\rangle \mathrm{exp}(-\gamma |\tau |),$$ (30)

where $`\gamma =\eta /M`$, and $`\langle \dot{y}^2\rangle `$ can be determined by equipartition:

$$\frac{M}{2}\langle \dot{y}^2\rangle =\frac{T}{2}.$$ (31)

Assembling the pieces together, we obtain

$$\mathrm{\Gamma }_x=\frac{T}{\eta }\frac{N_v+N_a}{L_y^2}.$$ (32)

A striking feature of this result is that it does not contain any volume enhancement: although there is a macroscopic factor of $`(N_v+N_a)`$, it is essentially canceled out by the inverse powers of $`L_y`$. The physical reason behind this suppression is the extremely long time it takes a vortex to circumnavigate the torus: diffusion through a distance of order $`L_y`$ requires a time of order $`L_y^2`$. The total number of vortices and antivortices is determined by the Boltzmann distribution:

$$N_v+N_a=\frac{2V}{(2\pi )^2}\int \mathrm{exp}[-\beta (F_0+p^2/2M)]\,d^2p$$ (33)

$$=\frac{V}{\pi }\mathrm{e}^{-F_0/T}MT,$$ (34)

where $`V=L_xL_y`$ is the total 2d volume and $`F_0`$ is the free energy required to create a vortex. Using $`F_0`$ instead of the vortex energy takes into account thermal population of the vortex’s internal states. Substituting (34) into (32), we finally obtain

$$\mathrm{\Gamma }_x=\frac{MT^2}{\pi \eta }\frac{L_x}{L_y}\mathrm{e}^{-F_0/T}.$$ (35)

This is the rate of transitions that change $`\alpha _x`$. The rate of those that change $`\alpha _y`$ is obtained by interchanging $`L_x`$ and $`L_y`$.

## V Discussion

As we have already mentioned, for a superconducting film the exponential factor in (35) can be made practically as small as one wishes, because $`F_0`$ grows linearly with the film’s thickness. So, a thick film on the surface of a torus or, as the limiting case, a solid superconducting torus such as shown in Fig. 2 provides quantum memory that is stable against thermal fluctuations. We propose the following way to write to and read from this quantum memory. Because magnetic field trapped in the hole of a superconducting torus (or of any other shape with a noncontractible loop) carries energy, the torus behaves as a giant “atom”, in the sense that it has a discrete energy spectrum, with different levels corresponding to different values of $`n_x`$. We can write the absolute value of the energy difference between levels with $`n_x=n_1\ge 0`$ and $`n_x=n_2>n_1`$ as

$$\hbar \omega =\frac{\hbar ^2c^2}{e^2R}(n_2^2-n_1^2),$$ (36)

where $`R`$ is of order of the linear size of the system (cf. Sect. 2) and may depend (presumably weakly) on $`n_1`$ and $`n_2`$. (We have restored $`\hbar `$ in this formula.) The corresponding electromagnetic wavelength is

$$\lambda =\frac{2\pi \alpha _{\mathrm{EM}}R}{n_2^2-n_1^2},$$ (37)

where $`\alpha _{\mathrm{EM}}`$ is the fine-structure constant. For $`R`$ of order of a few cm, and $`n_{1,2}\sim 1`$, the wavelength given by (37) is in the millimeter range. It is possible that one will be able to write a linear superposition of quantum states to this device by subjecting it to a pulse of radiation of frequency $`\omega `$ in a resonant cavity, similarly to how one induces Rabi precession in atoms. One may be able to read from this quantum memory by transferring the linear superposition to the radiation field in a high-$`Q`$ cavity, as was done for atoms in the experiment of ref. .
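As a quick numerical check of the scales quoted above, (36)–(37) can be evaluated directly. In the sketch below, $`R=3`$ cm and the transition $`n_1=0\to n_2=1`$ are illustrative choices of ours, not values fixed by the text.

```python
import math

# Evaluate (36)-(37) for a flux-state "atom": wavelength and level splitting.
alpha_em = 1/137.036       # fine-structure constant
c = 2.998e10               # speed of light, cm/s
R = 3.0                    # cm, linear size of the loop (illustrative)
n1, n2 = 0, 1              # flux quantum numbers (illustrative)

lam = 2*math.pi*alpha_em*R/(n2**2 - n1**2)   # eq. (37), in cm
omega = 2*math.pi*c/lam                      # corresponding frequency, rad/s

print(f"lambda = {10*lam:.2f} mm")           # ~1.4 mm: the millimeter range
print(f"omega  = {omega:.2e} rad/s")
```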
Unlike a single photon in a cavity or an excited state of an atom, the basis states in our case are macroscopically entangled, so this device will be able to store the linear superposition for a much longer time. If one wants to operate the read and write cavities at their principal resonant frequencies and use single-photon transitions, at least one of the dimensions of each cavity should be of order $`\lambda `$. We propose to use, as quantum memory, a loop of superconducting wire, such that the cross-sectional diameter of the wire is of order $`\lambda `$, while the size of the loop itself is large enough for $`R`$ to be on the order of centimeters. Only short arcs of the loop need to pass through the write and read cavities. This arrangement corresponds to $`L_y\sim \lambda `$ in our formulas. We expect that the effective size of the interaction region, for interaction between cavity photons and the wire, is also of order $`\lambda `$, and hence of the same order as $`L_y`$. In this sense, the interaction is nonlocal, so the writing time may not be exponentially large. At the same time, $`L_y`$ is still macroscopic, so the rate of thermal transitions changing $`n_x`$ is suppressed.

We leave calculation of the rate of topological transitions induced by a radiation field for future work and turn, briefly, to systems with non-Abelian anyons. Theoretically, a diverse set of manipulations on degenerate states is available for some of these systems . It has been argued that non-Abelian anyons are realizable as excitations of the Pfaffian state . The latter is a quantum Hall state with a certain type of pairing correlation between electrons and is closely related to the state proposed in as a possible explanation of the experimentally observed $`\nu =5/2`$ Hall plateau. With non-Abelian anyons, nontrivial topology is not required for a sample to have degenerate ground states. It is sufficient to “puncture” the surface of the sample with a few localized excitations (vortices). If the typical distance $`L`$ between these vortices is macroscopic, one expects that the zero temperature tunneling between the ground states is suppressed exponentially with $`L`$ . At finite temperature, in addition to those carefully planted vortices there will be a sea of thermally excited ones. What will be the rate at which quantum memory deteriorates in this case? The exponential Boltzmann factor, like the one in (35), should still be present in the rate. In quantum Hall samples $`F_0`$ will be the larger of the free energy required to create a vortex and the free energy required to unpin it from lattice defects. Because these systems are intrinsically two-dimensional, one cannot increase $`F_0`$ at will. As an estimate of $`F_0`$, we can use the value of temperature corresponding to the onset of strong temperature dependence of the diagonal resistivity. This value can be determined experimentally. According to ref. , it is 100 mK for the $`\nu =5/2`$ state described in that paper. The preexponential factor (prefactor) in the rate will be determined by motion of thermally excited vortices around a localized one. By analogy with the results of the present paper, we expect that a thermal vortex that is initially at distance $`\rho `$ from the localized one will contribute an amount of order $`1/\rho ^2`$ to the prefactor.
Then, the prefactor will be proportional to

$$\int \rho \,d\rho /\rho ^2\sim \mathrm{ln}L^{\prime },$$ (38)

where $`L^{\prime }`$ is either the linear size of the sample or the distance between the localized vortices, so the volume enhancement of the rate will be at most logarithmic.

While this paper was being completed, we have learned about a recent proposal to use, as a basis for quantum computation, current-carrying states in superconducting loops with Josephson junctions. The authors of ref. propose to obtain linear superpositions of these basis states by modulating magnetic fluxes through the loops with pulses of external current. This technique may work also for the quantum memory device proposed here, i.e. one may be able to use an external current instead of a resonating cavity to change $`n_x`$. We plan to return to analysis of this possibility elsewhere.

## ACKNOWLEDGMENTS

The author thanks T. Clark, S. Kivelson, S. Love, P. Muzikar, and M. Stone for discussions, and N. Giordano for pointing out ref. . This work was supported in part by the U.S. Department of Energy under Grant DE-FG02-91ER40681 (Task B).

## Electric field of a moving vortex

The electric field produced by a moving vortex determines the vortex mass $`M`$ and the viscosity coefficient $`\eta `$. Here we will compute the electric field produced at large distances from the vortex core. We will learn in the process that the region away from the core is not where most of the energy associated with the electric field is concentrated. This precludes us from actually calculating the vortex mass, but we will obtain an order of magnitude estimate.

We begin with a collection of formulas describing a static vortex, in notation close to that of ref. . The magnetic field of the vortex is in the $`z`$ direction. We consider the extreme type-II case when the penetration depth $`\delta `$ of magnetic field is much larger than the coherence length $`\xi `$. For the GL Hamiltonian (2),

$$1/\delta ^2=2g^2\zeta \psi _0^2,$$ (39)

$$1/\xi ^2=2a/\zeta .$$ (40)

When the distance $`r`$ from the center of the vortex is much larger than $`\xi `$, the magnetic field of a static vortex located at the origin is approximately

$$B(x,y)=\frac{1}{g\delta ^2}K_0(r/\delta ),$$ (41)

where $`r=(x^2+y^2)^{1/2}`$, and $`K_0`$ is the Macdonald function of the zeroth order. At $`r\ne 0`$, this magnetic field satisfies

$$\delta ^2\nabla ^2B-B=0.$$ (42)

We also recall that at small values of its argument $`K_0`$ is logarithmic: $`K_0(z)=-\mathrm{ln}z+O(1)`$. Now suppose the vortex moves through the origin with velocity v, which lies in the $`x`$–$`y`$ plane. The rate of change of the magnetic field is

$$\partial _tB(x,y)=-\text{v}\cdot \nabla B(x,y).$$ (43)

The changing magnetic field produces an electric field, which is related to $`\partial _tB`$ via one of Maxwell’s equations, $`(\nabla \times \text{E})_z=-\partial _tB/c`$. The general solution to this equation in our case is

$$\text{E}(x,y)=-c^{-1}(\text{v}\times 𝒆_z)B(x,y)+\nabla f(x,y),$$ (44)

where $`f`$ is so far an arbitrary function. We fix $`f`$ from the condition that $`\nabla \cdot \text{E}=0`$. This condition expresses the absence of charge separation inside the material; we expect it to hold to a good accuracy because charge separation in a metal is associated with a large (plasmon) frequency gap. Using (42), we then obtain, at large distances from the core,

$$\text{E}(x,y)=-c^{-1}(\text{v}\times 𝒆_z)B(x,y)+c^{-1}\delta ^2\nabla \left(v_y\partial _x-v_x\partial _y\right)B(x,y).$$ (46)

At small $`r`$, the first term here goes as $`\mathrm{ln}r`$, but the second term goes as $`1/r^2`$.
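These asymptotics can be confirmed with a quick finite-difference evaluation of the two terms in (46). The sketch below sets $`g=\delta =c=|\text{v}|=1`$ (an arbitrary normalization) and takes v along the $`x`$ axis, so that $`(v_y\partial _x-v_x\partial _y)B=-\partial _yB`$.

```python
import numpy as np
from scipy.special import k0

# Small-r behaviour of the two terms in (46), with g = delta = c = v = 1
# and the vortex velocity along x, so (v_y d_x - v_x d_y)B = -d_y B.
def B(x, y):
    return k0(np.sqrt(x*x + y*y))          # B = (1/g delta^2) K0(r/delta)

def grad(f, x, y, h):                      # central finite differences
    return ((f(x+h, y) - f(x-h, y))/(2*h),
            (f(x, y+h) - f(x, y-h))/(2*h))

dyB = lambda x, y: grad(B, x, y, 1e-6)[1]  # the scalar -(v_y d_x - v_x d_y)B

for r in [0.3, 0.1, 0.03, 0.01]:
    term1 = abs(B(r, 0.0))                 # first term: grows like |ln r|
    gx, gy = grad(dyB, r, 0.0, 1e-4)       # second term: gradient of dyB
    term2 = float(np.hypot(gx, gy))
    print(f"r={r:5.2f}  |term1|={term1:7.3f}  |term2|={term2:10.1f}  "
          f"r^2*|term2|={r*r*term2:6.3f}")  # last column tends to a constant
```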
When the second term dominates, $`E^2=v^2/(g^2c^2r^4)`$. Kinetic energy of the vortex is

$$K=\frac{1}{2}Mv^2\sim \frac{1}{8\pi }\int d^3x\,E^2.$$ (47)

For the field (46), the integral in (47) diverges at small $`r`$, due to the singular second term in (46). This means that the main contribution to the mass comes from the core of the vortex, where (46) does not apply. Nevertheless, we can obtain an order of magnitude estimate for the mass by using (46) and cutting off the divergence at distances of the order of the core radius, $`r\sim \xi `$. This gives

$$M\sim \frac{d}{e^2\xi ^2}\sim \frac{H_{c2}d}{ec},$$ (48)

where $`d`$ is the thickness of the film. The second estimate in (48) uses the upper critical field $`H_{c2}\sim \mathrm{\Phi }_0/\xi ^2`$, where $`\mathrm{\Phi }_0=\pi c/e`$ is the flux quantum. From eq. (29), we can now obtain an estimate for the viscosity coefficient $`\eta `$:

$$\frac{\eta }{d}\sim \frac{\sigma H_{c2}}{ec},$$ (49)

which is in agreement with the results of calculations based on microscopic theory .
# The Apparent Fractal Conjecture<sup>1</sup>

<sup>1</sup>Communication presented as part of the talk delivered at the “South Africa Relativistic Cosmology Conference in Honour of George F. R. Ellis 60th Birthday”, University of Cape Town, February 1-5, 1999.

Marcelo B. Ribeiro, Dept. Mathematical Physics, Institute of Physics, Federal University of Rio de Janeiro - UFRJ, C.P. 68528, Ilha do Fundão, Rio de Janeiro, RJ 21945-970, Brazil; E-mail: mbr@if.ufrj.br

ABSTRACT

> This short communication advances the hypothesis that the observed fractal structure of the large-scale distribution of galaxies is due to a geometrical effect, which arises when observational quantities relevant for the characterization of a cosmological fractal structure are calculated along the past light cone. If this hypothesis proves, even partially, correct, most, if not all, objections raised against fractals in cosmology may be solved. For instance, under this view the standard cosmology has zero average density, as predicted by an infinite fractal structure, while, at the same time, the cosmological principle remains valid. The theoretical results which suggest this conjecture are reviewed, as well as possible ways of checking its validity.

PACS numbers: 98.80.-k 98.65.Dx 98.80.Es 05.45.Df

The issue of whether or not the large-scale distribution of matter in the Universe actually follows a fractal pattern has divided cosmologists in the last decade or so, with the debates around this thorny question leading to a split of opinions between two main, and opposing, groups. On one side, the orthodox view sustains that since a fractal structure is inhomogeneous, it cannot agree with what we know about the structure and evolution of the Universe, as this knowledge is based on the cosmological principle and the Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, with both predicting homogeneity for the universal distribution of matter. Moreover, inasmuch as the cosmic microwave background radiation (CMBR) is isotropic, a result predicted by the FLRW cosmology, this group is, understandably, not prepared to give up the standard FLRW universe model and the cosmological principle, as that would mean giving up most, if not all, of what we have learned about the structure and evolution of the Universe since the dawn of cosmology (Peebles 1993; Davis 1997; Wu, Lahav and Rees 1999; Martínez 1999). On the other side, the heterodox view claims that the systematic and, under their methodology, unbiased interpretation of astronomical data continuously shows that the distribution of galaxies is not homogeneous, being in fact completely inhomogeneous up to present observational scales, without any sign of homogenization. Therefore, this other group not only disputes the traditional interpretation of astronomical data, but also the validity of the cosmological principle, going as far as to implicitly suggest that the CMBR may be a cosmological puzzle (Pietronero 1987; Coleman and Pietronero 1992; Pietronero, Montuori and Sylos-Labini 1997; Sylos-Labini, Montuori and Pietronero 1998). To reach those opposing conclusions, the validity of the methods used by both sides of this “fractal debate” is, naturally, hotly disputed, and so far a consensus on this issue has not been reached.
However, even if one is sympathetic to part of the orthodox argument, i.e., that we cannot simply throw away some basic tenets of modern cosmology, like the cosmological principle and the highly successful FLRW cosmological model, when one looks in a dispassionate way at the impressive data presented by the heterodox group, one cannot dispel a certain uneasy feeling that something might be wrong in the standard observational cosmology: the results are consistent and agree with one another (Coles 1998).

Another interesting aspect to note about this fractal debate is that it is as old as modern cosmology itself. The first suggestion that the Universe could be constructed in a hierarchical, or fractal, manner dates back to the very beginning of cosmology (Fournier D’Albe 1907; Charlier 1908, 1922), with contributions made even by Einstein (see Mandelbrot 1983, Ribeiro 1994, and references therein). Consequently, what we are witnessing now is only the latest chapter of this old debate, which is now focused on the statistical methods used by cosmologists to study galaxy clustering. The previous chapter was between de Vaucouleurs (1970ab) and Wertz (1970, 1971; see also de Vaucouleurs and Wertz 1971) on one side, and Sandage, Tammann, and Hardy (1972) on the other side, and was mainly focused on measurements of galaxy velocity fields and deviations from uniform expansion, a topic which has also resurfaced in the recent debate (Coles 1998). Therefore, it is clear that despite being dismissed many times as “unrealistic”, the fractal, or hierarchical, concept has so far refused to die, being able to pass from one generation of cosmologists to another (Oldershaw 1998). Thus, considering its old history, and its incredible ability to survive, it is perhaps premature to say that we are about to see this issue being settled with the dismissal of the fractal concept.

From the brief summary above, it is clear that the two sides of the fractal debate are locked in antagonistic and self-excluding viewpoints. Nevertheless, it is the opinion of this author that this divide may not be as radical as presented by both sides, and that it is possible to build a bridge between both opinions, reconciling them by means of a change in perspective regarding how we deal with observations in cosmology. What I intend to show next is that there is already enough theoretical evidence to suggest that fractality can be accommodated within the standard cosmology, where it would appear as a real observational effect of geometrical nature, arising from the way we carry out observations of the large-scale structure of the Universe. At the same time the cosmological principle, uniform Hubble expansion, CMBR isotropy, and well defined meanings for the cosmological parameters, such as $`\mathrm{\Omega }`$, can survive, together with the observational fractality obtained by the heterodox group mentioned above. This perspective has the advantage of preserving most of what we learned with the standard FLRW cosmology, and, at the same time, making sense of Pietronero and collaborators’ data, which, as seen above, can no longer be easily dismissed. The key to understanding how this can come about lies in re-discussing the meaning of observations in cosmology. In relativistic cosmology astronomical observations occur along the past light cone, a fact which is often overlooked when one carries out astronomical data reduction in cosmological models.
Observers often use cosmological formulae which do not take this key theoretical feature into consideration. They are often under the assumption that at the scales where observations are being made ($`z<1`$) one can safely ignore this fact, since this is the region where the Hubble law is very linear. However, Hubble law linearity has a range which does not coincide with a constant density (see below), and the observed average density is a key physical quantity for fractal characterization (Pietronero 1987; Pietronero, Montuori and Sylos-Labini 1997; Coleman and Pietronero 1992; Sylos-Labini, Montuori and Pietronero 1998; Ribeiro and Miguelote 1998).

The first theoretical evidence that the average density departs from local homogeneity at much lower values of the redshift $`z`$ appeared in Ribeiro (1992b). There, observational relations were calculated along the past light cone for unperturbed Einstein-de Sitter cosmology, and it was clearly shown that deviations from local homogeneity start to occur at $`z\sim 0.04`$, becoming very strong at $`z\sim 0.1`$. A plot of the average density $`\rho `$ against the luminosity distance $`d_\ell `$ showed a continuous decrease in the average density, although not in a linear manner. Still, Ribeiro (1992b) showed too that in the Einstein-de Sitter cosmology the following limit holds,

$$\underset{d_\ell \to \infty }{lim}\rho =0.$$

Many years ago, Wertz stated that a pure hierarchical cosmology ought to obey the “zero global density postulate: for a pure hierarchy the global density exists and is zero everywhere” (Wertz 1970, p. 18). Such a result was also speculated by Pietronero (1987) as a natural development of his fractal model. Therefore, what the above limit tells us is that the Einstein-de Sitter model does obey this key requirement of fractal cosmologies. In addition, the decay of the average density at increasing distances, another key aspect of a fractal model, is also obeyed by the Einstein-de Sitter cosmology. Notice that these two fractal features, present in all standard cosmological models (Ribeiro 1993, 1994), appear without any violation of the cosmological principle, linearity of the Hubble law, and CMBR isotropy. Moreover, cosmological parameters such as $`q_0`$, $`\mathrm{\Omega }_0`$, $`H_0`$ still have their usual definitions and interpretations. What is clear from Ribeiro (1992b), and sequel papers (Ribeiro 1993, 1994), is that the homogeneity of the standard cosmological models is spatial, that is, it is a geometrical feature which does not necessarily translate itself into an astronomically observable quantity. Although a number of authors are aware of this fact, what came as a surprise was the calculated low redshift value where this observational inhomogeneity appears. Therefore, it was clear by then that relativistic effects start to play an important role in observational cosmology at much lower redshift values than previously assumed. In another paper (Ribeiro 1995), the results above were further analysed and it became clear why there seems to be no contradiction between strong observational inhomogeneity and the linearity of the Hubble law for $`0.1\lesssim z<1`$ in Einstein-de Sitter cosmology. Due to the non-linearity of the Einstein field equations, observational relations behave differently at different redshift depths. (A toy numerical illustration of this point is sketched below.)
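The flavour of this behaviour can be seen in a toy calculation, which is a deliberate oversimplification of Ribeiro’s full light-cone treatment. Assume that the observational average density is the mass inside the past light cone divided by the Euclidean volume of radius $`d_\ell `$; in Einstein-de Sitter, with comoving distance $`r=(2c/H_0)(1-1/\sqrt{1+z})`$ and $`d_\ell =(1+z)r`$, this gives $`\rho /\rho _0=(1+z)^{-3}`$, which already departs from homogeneity at the ten-percent level for $`z`$ of a few hundredths and tends to zero as $`d_\ell \to \infty `$.

```python
# Toy illustration (not Ribeiro's full light-cone calculation): in
# Einstein-de Sitter, defining the observed average density as
# (mass inside comoving radius r(z)) / ((4/3) pi d_L(z)^3), with
# d_L = (1+z) r, reduces to rho/rho_0 = (1+z)**-3.
for z in [0.01, 0.04, 0.1, 0.5, 1.0, 10.0]:
    print(f"z = {z:5.2f}   rho/rho_0 = {(1 + z)**-3:.3f}")
```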
Consequently, while the linearity of the Hubble law is well preserved in the Einstein-de Sitter model up to $`z\sim 1`$, a value implicitly assumed as the limit below which relativistic effects can be safely ignored, the observational density is strongly affected by relativistic effects at much lower redshift values. A power series expansion of these two quantities showed that while the zeroth order term vanishes in the distance-redshift relation, it is non-zero for the average density as plotted against redshift. This zeroth order term is the main factor behind the different behaviour of these two observational quantities at small redshifts. Pietronero, Montuori and Sylos-Labini (1997) called this effect the “Hubble-de Vaucouleurs paradox” (see also Baryshev et al. 1998). However, from the analysis presented by Ribeiro (1995) it is clear that this is not a paradox, but just very different relativistic effects on the observables in the moderate redshift range ($`0.1\lesssim z<1`$). This effect explains why Sandage, Tammann and Hardy (1972) failed to find deviations from uniform expansion in a hierarchical model: they were expecting that such a strong inhomogeneity would affect the velocity field, but it is clear now that if we take a relativistic perspective on these effects they are not necessarily correlated in the range expected by Sandage and collaborators. Notice that de Vaucouleurs and Wertz also expected that their inhomogeneous hierarchical models would necessarily affect the velocity field, and change the linearity of the Hubble law at $`z<1`$, and once such a change was not observed by Sandage, Tammann, and Hardy (1972) it was thought that this implied an immediate dismissal of the hierarchical concept. Again, this is not necessarily the case if we take a relativistic view of those observational quantities.

The results discussed above show that some key fractal features can already be found in the simplest possible standard cosmological model, that is, in the unperturbed Einstein-de Sitter universe. However, as the average density decay is not linear in this model, considering all these aspects we may naturally ask whether or not a perturbed model could turn the density decay at increasing redshift depths into a power law type decay, as predicted by the fractal description of galaxy clustering. If this happens, then standard cosmology can be reconciled with a fractal galaxy distribution. Notice that there are some indications that this is a real possibility, as Amendola (1999) pointed out that locally the cold dark matter and fractal models predict the same behaviour for the power spectrum, a conclusion apparently shared by Cappi et al. (1998). In addition, confirming Ribeiro’s (1992b, 1995) conclusions, departures from the expected Euclidean results at small redshifts were also reported by Longair (1995, p. 398), and the starting point for his findings was the same as employed by Ribeiro (1992b, 1995): the use of the source number count expression along the null cone. Considering all results outlined above, I feel there are enough grounds to advance the following conjecture: the observed fractality of the large-scale distribution of galaxies should appear when observational relations necessary for fractal characterization are calculated along the past light cone in a perturbed metric of standard cosmology. If this conjecture proves, even partially, correct, fractals in cosmology would no longer be necessarily seen as opposed to the cosmological principle.
Notice that this can only happen in circumstances where fractality is characterized by an observed, smoothed out, and averaged fractal system, as opposed to building a fractal structure into the very geometrical structure of spacetime, as initially thought necessary for having fractals in cosmology (Mandelbrot 1983; Ribeiro 1992a). Thus, the usual tools used in relativistic cosmology, like the fluid approximation, will remain valid. As a possible consequence of this conjecture, a detailed characterization of the fractal structure could provide direct clues about the kind of cosmological perturbation needed in our cosmological models, and this could shed more light on issues like galaxy formation. There is now underway an attempt to check the validity of this conjecture (Abdalla, Mohayaee, and Ribeiro 1999) by means of a specific perturbative approach to standard cosmology (Abdalla and Mohayaee 1999).

Thanks go to E. Abdalla and R. Mohayaee for reading the original manuscript and for helpful comments and suggestions. Partial support from FUJB-UFRJ is also acknowledged.

References

Abdalla, E., and Mohayaee, R. 1999, Phys. Rev. D, 59, 084014, astro-ph/9810146
Abdalla, E., Mohayaee, R., and Ribeiro, M. B. 1999, astro-ph/9910003
Amendola, L. 1999, preprint (to appear in the Proceedings of the IX Brazilian School of Cosmology and Gravitation, M. Novello 1999)
Cappi, A., Benoist, C., da Costa, L. N., and Maurogordato, S. 1998, A & A, 335, 779
Baryshev, Y. V., Sylos-Labini, F., Montuori, M., Pietronero, L., and Teerikorpi, P. 1998, Fractals, 6, 231, astro-ph/9803142
Charlier, C. V. L. 1908, Ark. Mat. Astron. Fys., 4, 1
Charlier, C. V. L. 1922, Ark. Mat. Astron. Fys., 16, 1
Coleman, P. H., and Pietronero, L. 1992, Phys. Rep., 213, 311
Coles, P. 1998, Nature, 391, 120
Davis, M. 1997, Critical Dialogues in Cosmology, N. Turok, Singapore: World Scientific, 1997, 13, astro-ph/9610149
de Vaucouleurs, G. 1970a, Science, 167, 1203
de Vaucouleurs, G. 1970b, Science, 168, 917
de Vaucouleurs, G., and Wertz, J. R. 1971, Nature, 231, 109
Fournier D’Albe, E. E. 1907, Two New Worlds: I The Infra World; II The Supra World, London: Longmans Green
Longair, M. S. 1995, The Deep Universe, Saas-Fee Advanced Course 23, B. Binggeli and R. Buser, Berlin: Springer, 1995, 317
Mandelbrot, B. B. 1983, The Fractal Geometry of Nature, New York: Freeman
Martínez, V. J. 1999, Science, 284, 445
Oldershaw, R. L. 1998, http://www.amherst.edu/~rlolders/loch.htm
Peebles, P. J. E. 1993, Principles of Physical Cosmology, Princeton University Press
Pietronero, L. 1987, Physica A, 144, 257
Pietronero, L., Montuori, M., and Sylos-Labini, F. 1997, Critical Dialogues in Cosmology, N. Turok, Singapore: World Scientific, 1997, 24
Ribeiro, M. B. 1992a, Ap. J., 388, 1
Ribeiro, M. B. 1992b, Ap. J., 395, 29
Ribeiro, M. B. 1993, Ap. J., 415, 469
Ribeiro, M. B. 1994, Deterministic Chaos in General Relativity, D. Hobbil, A. Burd, and A. Coley, New York: Plenum Press, 1994, 269
Ribeiro, M. B. 1995, Ap. J., 441, 477, astro-ph/9910145
Ribeiro, M. B., and Miguelote, A. Y. 1998, Brazilian J. Phys., 28, 132, astro-ph/9803218
Sandage, A., Tammann, G. A., and Hardy, E. 1972, ApJ, 172, 253
Sylos-Labini, F., Montuori, M., and Pietronero, L. 1998, Phys. Rep., 293, 61, astro-ph/9711073
Wertz, J. R. 1970, Newtonian Hierarchical Cosmology, PhD thesis (University of Texas at Austin, 1970)
Wertz, J. R. 1971, Ap. J., 164, 227
Wu, K. K. S., Lahav, O., and Rees, M. J. 1999, Nature, 397, 225, astro-ph/9804062
Yaroslavl State University Preprint YARU-HE-99/06, astro-ph/9909154

# On the Possible Enhancement of the Magnetic Field by Neutrino Reemission Processes in the Mantle of a Supernova

## Abstract

URCA neutrino reemission processes under the conditions in the mantle of a supernova with a strong toroidal magnetic field are investigated. It is shown that parity violation in these processes can be manifested macroscopically as a torque that rapidly spins up the region of the mantle occupied by such a field. Neutrino spin-up of the mantle can strongly affect the mechanism of further generation of the toroidal field, specifically, it can enhance the field in a small neighborhood of the rigid-body-rotating core of the supernova remnant.

PACS numbers: 97.60.Bw, 95.30.Cq

The problem of the shedding of the mantle in an explosion of a type-II supernova is still far from a complete solution . It is known that in several seconds after the collapse of a presupernova an anomalously high neutrino flux with typical luminosities $`L\sim 10^{52}`$ ergs/s is emitted from the neutrinosphere, which is approximately of the same size as the remnant core . In principle, such a neutrino flux could initiate a process leading to the shedding of the mantle as a result of the absorption and scattering of neutrinos by nucleons and the $`e^+e^-`$ plasma of the medium . However, detailed calculations in spherically symmetric collapse models have shown that such processes are too weak for mantle shedding . In a magnetorotational model , mantle shedding is initiated by the outward pressure of a strong toroidal magnetic field generated by the differential rotation of the mantle with the core’s primary poloidal magnetic field “frozen” into it. Indeed, as calculations show , when a mantle rotates with a millisecond period in a poloidal field $`B_0\sim 10^{12}`$–$`10^{13}`$ G, a toroidal field $`B\sim 10^{15}`$–$`10^{16}`$ G is generated in a time of order of a second. We note that the model in Ref. contains a fundamental limitation on the energy of the toroidal field (it cannot exceed the kinetic energy of the “core + shell” system) and therefore on the maximum field itself.

In the present letter we investigate the possibility that this magnetic field “frozen in” the mantle is enhanced as the result of elementary neutrino reemission processes occurring in the mantle. We assume the mantle in the vicinity of the neutrinosphere to be a hot ($`T`$ of several MeV) and quite dense (though transparent to neutrinos, $`\rho \sim 10^{11}`$–$`10^{12}`$ g/cm<sup>3</sup>) medium consisting of free nucleons and $`e^+e^-`$ plasma. Under these conditions the dominant neutrino reemission processes are the URCA processes:

$$p+e^-\to n+\nu _e,$$ (1)

$$n+e^+\to p+\stackrel{~}{\nu }_e,$$ (2)

$$n+\nu _e\to p+e^-,$$ (3)

$$p+\stackrel{~}{\nu }_e\to n+e^+.$$ (4)

We note that $`\beta `$ decay is statistically suppressed in such a medium. The basic idea of this letter is as follows. In an external magnetic field, neutrinos are emitted and absorbed asymmetrically with respect to the direction of the magnetic field as a result of the parity violation in the processes (1)–(4). Therefore a macroscopic torque spinning up the mantle can arise in a toroidal field. It is known that for an equilibrium neutrino distribution function such a neutrino-recoil momentum must be zero. However, the supernova region under consideration is nonequilibrium for neutrinos, so that the torque that arises in it is different from zero.
Moreover, as we shall show below, the torque can be large enough to change substantially the distribution of the angular rotational velocities of the mantle in the region filled with a strong magnetic field over the characteristic neutrino emission times. According to the equation governing the generation of a toroidal field , a large change in the gradient of the angular velocities in the region can lead to redistribution of the magnetic field (specifically, enhancement of the field in a small neighborhood of the rigid-body-rotating core). A quantitative estimate of the effect follows from the expression for the energy-momentum transferred by neutrinos to a unit volume of the mantle per unit time:

$$\frac{dP_\alpha }{dt}=\frac{1}{V}\underset{i}{\prod }\int dn_if_i\underset{f}{\prod }\int dn_f\left(1-f_f\right)\frac{\left|S_{if}\right|^2}{𝒯}k_\alpha ,$$ (5)

where $`dn_i`$ and $`dn_f`$ are the number of initial and final states in an element of the phase space, $`f_i`$ and $`f_f`$ are the distribution functions of the initial and final particles, $`k_\alpha `$ is the neutrino momentum, and $`|S_{if}|^2/𝒯`$ is the squared $`S`$-matrix element per unit time. It is of interest to calculate the latter under our conditions, since, as far as we know, the URCA processes (1)–(4) have been previously studied only for relatively weak ($`B\ll m_e^2/e`$) and very strong ($`B\gg m_p^2/e`$) fields. Assuming that electrons and positrons in the plasma mainly occupy only the lowest Landau level ($`\mu _e\lesssim \sqrt{2eB}`$, where $`\mu _e`$ is the chemical potential of electrons), we obtained the following expression for the squared $`S`$-matrix element summed over all proton Landau levels and polarizations of the final particles and averaged over the polarizations of the initial particles:

$$\left|S_{if}\right|^2=\frac{G_F^2\mathrm{cos}^2\theta _c\left(2\pi \right)^3𝒯}{2L_yL_zV^2}\frac{\mathrm{exp}\left(-Q_{\perp }^2/2eB\right)}{4\omega \epsilon }\times \frac{1}{2}\left[\underset{n=0}{\overset{\infty }{\sum }}\frac{\left|M_+\right|^2}{n!}\left(\frac{Q_{\perp }^2}{2eB}\right)^n\delta ^{\left(3\right)}+\underset{n=1}{\overset{\infty }{\sum }}\frac{\left|M_-\right|^2}{\left(n-1\right)!}\left(\frac{Q_{\perp }^2}{2eB}\right)^{n-1}\delta ^{\left(3\right)}\right],$$ (6)

$$\left|M_\sigma \right|^2=4\left(1+g_a\sigma \right)^2\left[2\left(up\right)\left(uk\right)-\left(pk\right)-\left(up\right)\left(k\stackrel{~}{\phi }u\right)-\left(uk\right)\left(p\stackrel{~}{\phi }u\right)\right]+8g_a^2\left(1+\sigma \right)\left[\left(pk\right)-\left(p\stackrel{~}{\phi }k\right)\right],$$ (7)

where $`u_\alpha `$ is the four-velocity of the medium, $`𝐁=(0,0,B)`$, and $`\delta ^{(3)}`$ is the product of the energy delta-function and two delta-functions of the momentum in the direction of the magnetic field and of one transverse momentum component, which are conserved in the reactions; $`(pk)=\epsilon \omega -p_3k_3`$, $`p^\alpha =(\epsilon ,𝐩)`$ and $`k^\alpha =(\omega ,𝐤)`$ are the four-momenta of the electron and neutrino, $`Q_{\perp }^2`$ is the square of the momentum transfer transverse to the magnetic field in the reactions (1) and (3) \[with the corresponding substitutions $`p\to -p`$ and $`k\to -k`$ in the crossing reactions (2) and (4)\], $`\stackrel{~}{\phi }_{\alpha \beta }=\stackrel{~}{F}_{\alpha \beta }/B`$ is the dimensionless dual magnetic-field tensor, $`\sigma =\pm 1`$ is the
projection of the proton spin on the direction of the magnetic field, $`n`$ is the summation index over the proton Landau levels, $`𝒯V=𝒯L_xL_yL_z`$ is the normalization four-volume, $`g_a`$ is the axial constant of the nucleonic current, $`G_F`$ is the Fermi constant, and $`\theta _c`$ is the Cabibbo angle. We note that in the limit of a strong magnetic field, when the protons occupy only the ground Landau level, expressions (6) and (7) agree with the result obtained previously in Ref. .

Analysis shows that the URCA processes are the fastest reactions in the medium considered, and they transfer the medium into a state of $`\beta `$ equilibrium in a time of order $`10^{-2}`$ s. Therefore we employed the condition of $`\beta `$ equilibrium and singled out in the expression for the energy-momentum transfer to the shell (5) the separate contributions from processes involving neutrinos (1), (3) and antineutrinos (2), (4):

$$\frac{dP_\alpha ^{(\nu ,\stackrel{~}{\nu })}}{dt}=\int \frac{d^3k}{\left(2\pi \right)^3}k_\alpha \left[1+\mathrm{exp}\left(-\frac{\omega -\mu _{(\nu ,\stackrel{~}{\nu })}}{T}\right)\right]𝒦^{(\nu ,\stackrel{~}{\nu })}\delta f^{(\nu ,\stackrel{~}{\nu })}.$$ (8)

Here, $`\delta f^{(\nu ,\stackrel{~}{\nu })}`$ is the deviation of the distribution function from the equilibrium function, and $`𝒦^{(\nu ,\stackrel{~}{\nu })}`$ is the (anti)neutrino absorption coefficient, defined as

$$𝒦^{(\nu ,\stackrel{~}{\nu })}=\underset{i}{\prod }\int dn_if_i\underset{f}{\prod }\int dn_f\left(1-f_f\right)\frac{\left|S_{if}\right|^2}{𝒯},$$ (9)

where the integration extends over all states except the neutrino states in the reaction (3) and antineutrino states in the reaction (4), respectively. As follows from Eq. (8), the momentum transferred to the medium is nonzero only if the neutrino distribution function deviates from the equilibrium distribution. To calculate the absorption coefficient $`𝒦^{(\nu ,\stackrel{~}{\nu })}`$ we assumed that the ultrarelativistic $`e^+e^-`$ plasma occupies only the ground Landau level, while the protons occupy quite many levels (the dimensionless parameter $`\delta =eB/m_pT\ll 1`$). We also used the fact that at the densities under consideration the nucleonic gas is Boltzmannian and nonrelativistic. Then, dropping terms $`\sim \delta `$, we can write expression (9) in the form

$$𝒦^{(\nu ,\stackrel{~}{\nu })}=\frac{G_F^2\mathrm{cos}^2\theta _c\,eB\,N_{(n,p)}}{2\pi }\left[\left(1+3g_a^2\right)-\left(g_a^2-1\right)k_3/\omega \right]\times \left[1+\mathrm{exp}\left(\frac{\pm \left(\mu _e-\mathrm{\Delta }\right)-\omega }{T}\right)\right]^{-1},$$ (10)

where $`N_n`$, $`N_p`$, and $`m_n`$, $`m_p`$ are the number densities and masses of the neutrons and protons, respectively, $`\mathrm{\Delta }=m_n-m_p`$, and $`\omega `$ and $`k_3`$ are the neutrino energy and the neutrino momentum in the direction of the magnetic field, respectively. For further calculations we employed the neutrino distribution function in the model of a spherically symmetric collapse of a supernova in the absence of a magnetic field .
This is a quite good approximation when the region occupied by the strong magnetic field is smaller than or of the order of the neutrino mean-free path. By a strong field we mean a field in which the $`e^+e^-`$ plasma occupies only the ground Landau level: $`eB\gtrsim \mu _e^2`$. In the model of Ref. the region occupied by such a field is no greater than several kilometers in size, and we estimate the neutrino mean-free path in this region as

$$l_\nu \sim 4\,\mathrm{km}\left(\frac{4.4\times 10^{16}\,\mathrm{G}}{B}\right)\left(\frac{5\times 10^{11}\,\mathrm{g}/\mathrm{cm}^3}{\rho }\right).$$ (11)

Therefore the magnetic field cannot strongly alter the neutrino distribution function, and our approximation is quite correct.

As calculations of the components of the energy-momentum (8) transferred to the medium during neutrino reemission showed, the radial force arising is much weaker than the gravitational force and cannot greatly influence the mantle dynamics. However, the force acting in the direction of the magnetic field can change quite rapidly the distribution of the angular velocities in the region occupied by the strong magnetic field. The density of this force can be represented as

$$\mathcal{F}_{\parallel }^{\left(tot\right)}=\mathcal{F}_{\parallel }^{\left(\nu \right)}+\mathcal{F}_{\parallel }^{\left(\stackrel{~}{\nu }\right)}=𝒩\left[\left(3\langle \mu ^2\rangle _\nu -1\right)I\left(a\right)e^{-a}+\left(3\langle \mu ^2\rangle _{\stackrel{~}{\nu }}-1\right)I\left(-a\right)\right],$$ (12)

$$\langle \mu ^2\rangle =\left(\int \mu ^2\omega f\,d^3k\right)\left(\int \omega f\,d^3k\right)^{-1},$$

where $`\mu `$ is the cosine of the angle between the neutrino momentum and the radial direction, $`a=\mu _e/T`$, and

$$I\left(a\right)=\int _0^{\infty }\frac{y^3\,dy}{\mathrm{e}^{y-a}+1}.$$

In deriving Eq. (12) we used the one-dimensional factorized neutrino distribution function $`f^{(\nu ,\stackrel{~}{\nu })}=\varphi ^{(\nu ,\stackrel{~}{\nu })}(\omega /T_\nu )\mathrm{\Phi }^{(\nu ,\stackrel{~}{\nu })}(r,\mu )`$ , where $`T_\nu `$ is the neutrino spectral temperature and $`r`$ is the distance from the core center. To estimate the force in the diffusion region we assumed that $`T_\nu \simeq T`$ and chose $`\varphi ^{(\nu ,\stackrel{~}{\nu })}(\omega /T_\nu )=\mathrm{exp}(-\omega /T_\nu )`$. We determined the dimensional parameter $`𝒩`$ in expression (12) as

$$𝒩=\frac{G_F^2\mathrm{cos}^2\theta _c}{\left(2\pi \right)^3}\frac{g_a^2-1}{3}eBT^4N_N\simeq 4.5\times 10^{20}\,\frac{\mathrm{dynes}}{\mathrm{cm}^3}\left(\frac{T}{5\,\mathrm{MeV}}\right)^4\left(\frac{B}{4.4\times 10^{16}\,\mathrm{G}}\right)\left(\frac{\rho }{5\times 10^{11}\,\mathrm{g}/\mathrm{cm}^3}\right),$$ (13)

where $`N_N=N_n+N_p`$ is the total nucleon number density. The force (12) was estimated numerically in the diffusion region of the supernova atmosphere for typical (excluding the field) values of the macroscopic parameters for this region: $`T=5`$ MeV, $`B=4.4\times 10^{16}`$ G, $`\rho =5\times 10^{11}`$ g/cm<sup>3</sup>. For these values $`a\simeq 3`$, $`\langle \mu ^2\rangle _\nu \simeq \langle \mu ^2\rangle _{\stackrel{~}{\nu }}\simeq 0.4`$ (Ref. ), and the force density in the direction of the field can be estimated from Eq.
(12) as

$$\mathcal{F}_{\parallel }^{\left(tot\right)}\approx \mathcal{F}_{\parallel }^{\left(\nu \right)}\sim 𝒩.$$ (14)

We note that the angular acceleration produced by the torque exerted by such a force is large enough to spin up the region of the mantle containing a strong magnetic field to typical angular velocities of a fast pulsar (with the rotational period $`P_0\sim 10^{-2}`$ s) in a characteristic time of order of a second. In our opinion, this result is of interest in itself and can serve as a basis for a number of applications. However, we shall give a qualitative discussion of only one possible manifestation of this result – the effect of such a fast spin-up of the mantle on the further generation of the toroidal magnetic field. Indeed, if the modification of the gradient of the angular velocities of the mantle is large, the toroidal magnetic field in the mantle at subsequent times will vary according to a law that is different from the linear law . Analysis of the equation governing the generation of a toroidal field with allowance for the force (14), which is linear in this field, leads to the conclusion that its growth in time is much faster (exponential) in the quite small region in which the force acts ($`eB\gtrsim \mu _e^2`$). However, the main source of the magnetic field energy, just as in the case when there is no force, is the kinetic energy of the rigid-body-rotating core. Thus the force (14) can lead to a peculiar rearrangement of the region occupied by the strong field. Specifically, with virtually no change in energy, the magnetic field can become concentrated in a narrower spatial region and can therefore have in this region higher intensities, on average, than in the absence of the spin-up force. We note that the effect under discussion can strongly influence the mantle-shedding process and also the mechanism leading to the formation of an anisotropic $`\gamma `$-ray burst in the explosion of a supernova with a rapidly rotating core . However, in order to perform detailed calculations of the generation of a toroidal field, the enhancement of the field due to the “neutrino spin-up”, and the effect of this enhancement on the indicated processes, it is necessary to analyze the complete system of MHD equations. That analysis lies far outside the scope of the present work and is the subject of a separate investigation. Qualitative estimates show that a magnetic field with strength $`B\sim 10^{17}`$ G can be generated by the above-described mechanism in a small neighborhood (of order a kilometer) of a rapidly rotating core with period $`P_0\sim 5\times 10^{-3}`$ s, and this region decreases with increasing period.

In summary, we have shown that URCA neutrino reemission processes can produce, in the region of the mantle that is filled with a strong toroidal magnetic field, angular accelerations which are sufficiently large as to greatly influence the mechanism of further generation of the field. Specifically, such rapid redistribution of the angular velocities can enhance the field in a small neighborhood of the rigid-body-rotating core of a remnant.

We are grateful to S. I. Blinnikov for fruitful discussions and for assistance in formulating the problem and to N. V. Mikheev and M. V. Chistyakov for helpful discussions. This work was partially supported by INTAS (Grant No. 96-0659) and the Russian Foundation for Basic Research (Grant No. 98-02-16694).
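As a closing back-of-envelope check of the spin-up time quoted above, the sketch below combines the reference force density from (13) with the reference mantle density. The lever-arm radius $`R\sim 10`$ km and the target period $`P_0=10^{-2}`$ s are illustrative inputs of ours, not numbers derived in the text.

```python
import math

# Back-of-envelope check of the ~1 s spin-up time (CGS units).
force_density = 4.5e20   # dynes/cm^3, eq. (13) at the reference T, B, rho
rho = 5e11               # g/cm^3, reference mantle density
R = 1e6                  # cm (~10 km), radius of the spun-up region (assumed)
P0 = 1e-2                # s, rotation period of a fast pulsar

a = force_density/rho    # linear acceleration imparted to the fluid
v = 2*math.pi*R/P0       # azimuthal velocity corresponding to period P0
print(f"a = {a:.1e} cm/s^2, v = {v:.1e} cm/s, t = v/a = {v/a:.2f} s")
```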
# Napoleon in isolation

## 1. Geometric isolation

### 1.1. Definition

Let $`M`$ be a complete, finite volume hyperbolic $`3`$-manifold with $`n`$ torus cusps, which we denote $`c_1,\dots ,c_n`$. The following definition is found in :

###### Definition 1.1. A collection of cusps $`c_{j_1},\dots ,c_{j_m}`$ is geometrically isolated from a collection $`c_{i_1},\dots ,c_{i_n}`$ if any hyperbolic Dehn surgery on any collection of the $`c_{i_k}`$ leaves the geometric structure on all the $`c_{j_l}`$ invariant.

Note that this definition is not symmetric in the collections $`c_{i_k}`$ and $`c_{j_l}`$, and in fact there are examples which show that a symmetrized definition is strictly stronger (see ). More generally, we can ask for some prescribed set of fillings on the $`c_{i_k}`$ which leave the $`c_{j_l}`$ invariant. Generalized (non-integral) hyperbolic surgeries on a cusp are a holomorphic parameter for the space of all (not necessarily complete) hyperbolic structures on $`M`$ with a particular kind of allowable singularities (i.e. generalized cone structures) in a neighborhood of the complete structure. Moreover, the complex dimension of the space of geometric shapes on a complete cusp is $`1`$. Consequently, for dimension reasons whenever $`n>m`$ there will be families of generalized surgeries leaving the geometric structures on the $`c_{j_l}`$ invariant. There is no particular reason to expect, however, that any of these points will correspond to an integral surgery on the $`c_{i_k}`$. When there is a $`1`$ complex dimensional holomorphic family of isolated generalized surgeries which contains infinitely many integral surgeries, we say that we have an example of an isolation phenomenon.

Neumann and Reid describe other qualities of isolation in , including the following:

###### Definition 1.2. A collection of cusps $`c_{j_1},\dots ,c_{j_m}`$ is strongly isolated from a collection $`c_{i_1},\dots ,c_{i_m}`$ if after any hyperbolic Dehn surgeries on any collection of the $`c_{j_l}`$, a further surgery on any collection of the $`c_{i_k}`$ leaves the geometry of the (possibly filled) cusps $`c_{j_l}`$ invariant. A collection of cusps $`c_{j_1},\dots ,c_{j_m}`$ is first-order isolated from a collection $`c_{i_1},\dots ,c_{i_m}`$ if the derivative of the deformation map from generalized surgeries on the $`c_{i_k}`$ to the space of cusp shapes on the $`c_{j_l}`$ vanishes at the complete structure.

By using the structure of the $`\mathrm{\Phi }`$ function defined in , Neumann and Reid show that strong isolation and first order isolation are symmetric relations. First order isolation can be restated in terms of group cohomology, and is studied in some papers of Kapovich, notably . In this paper we produce new constructions of isolation phenomena of various qualities, both by extending or modifying known constructions, and by introducing a conceptually original construction based on Napoleon’s theorem in plane geometry. The sheer wealth of examples that these techniques can produce, used in combination, strongly suggests that instances of isolation phenomena are not isolated phenomena.

### 1.2. Holomorphic rigidity

Suppose we want to show that a cusp $`d`$ is isolated from a cusp $`c`$.
Then since the shape of the complete cusp $`d`$ depends holomorphically on the generalized surgery on $`c`$, it suffices to show that infinitely many surgeries on $`c`$ keep the structure of $`d`$ fixed, since these can only accumulate at the complete (unfilled) structure, where we know the function relating the structure on $`d`$ to the surgery on $`c`$ is regular.

We describe this holomorphic structure in more detail. Choose a meridian $`m`$ of a cusp $`c`$. Let $`M_{(p,q)}`$ be obtained by doing $`(p,q)`$ surgery on $`c`$. The hyperbolic structure on $`M_{(p,q)}`$ determines a representation $`\rho _{p,q}:\pi _1(M)\to PSL(2,\mathbb{C})`$. The image of $`[m]`$ under this representation has a well–defined complex length $`u`$ in $`\mathbb{C}/2\pi i\mathbb{Z}`$, which is the logarithm of the ratio of the eigenvalues of $`\rho _{p,q}(m)`$. We may choose a branch of the logarithm so that the value $`0`$ corresponds to the complete structure. Then small deformations of the hyperbolic structure on $`M`$ corresponding to generalized surgeries on $`c`$ are parameterized in a $`2:1`$ way by $`u`$. The set of Euclidean structures on a complete cusp is parameterized by the complex orbifold $`\mathcal{M}_1=\mathbb{H}^2/PSL(2,\mathbb{Z})`$, and it is well–known that the map from $`u`$ to $`\mathcal{M}_1`$ is analytic, and regular at $`0`$. For more detail, see .

### 1.3. Rigid orbifolds

A systematic study of isolation was initiated in . Most of the examples constructed in of pairs of cusps $`c_1,c_2`$ which are geometrically isolated have the property that there is a totally geodesic rigid triangle orbifold separating the two cusps. Such a separating surface splits the manifold $`M`$ into two pieces $`M_1,M_2`$ where $`c_i`$ sits as a cusp in $`M_i`$. Then a surgery on $`c_i`$ deforms only the piece $`M_i`$, keeping the geometry of the splitting orbifold unchanged, and $`M_i`$ can then be glued to $`M_{i+1}`$ to produce a complete hyperbolic structure on $`M`$. One sees that the geometry of the entire piece containing the unfilled cusp is unchanged by this operation, and therefore that each of the two cusps is isolated from the other cusp. If $`M`$ covers an orbifold $`N`$ containing a totally geodesic triangle orbifold which separates cusps $`c`$ and $`d`$, then in $`M`$ any surgeries on lifts of $`c`$ which descend to $`N`$ will leave unaffected the structures on lifts of $`d`$.

### 1.4. Rigid cusps

A refinement of the construction above comes when the rigid orbifold is boundary parallel. The square torus $`\mathbb{C}/\mathbb{Z}[i]`$ and the hexagonal torus $`\mathbb{C}/\mathbb{Z}[\frac{1+\sqrt{3}i}{2}]`$ have rotational symmetries of order $`4`$ and $`6`$ respectively. If these symmetries about a cusp $`c`$ can be made to extend over the entire manifold $`M`$, then any surgery preserving these symmetries will keep the cusp $`c`$ square or hexagonal respectively. Note that isolation phenomena produced by this method tend to be one–way.

###### Example 1.1. Let $`M=T^2\times I\setminus K`$ where $`K`$ is a knot which has $`4`$–fold rotational symmetry, as seen from either of the $`T^2`$ ends of $`T^2\times I`$. Then $`M`$ has $`4`$–fold rotational symmetry which is preserved after $`(p,q)`$ filling on the knot $`K`$. This symmetry keeps the shape of the two ends of $`T^2\times I`$ square after every surgery on $`K`$. In general, surgery on either of the ends of the $`T^2`$ will disrupt the symmetry and not exhibit any isolation.

### 1.5. Mutation

An incompressible surface in a hyperbolic manifold does not need to be rigid for certain topological symmetries to be realized as geometric symmetries.
In a finite volume hyperbolic $`3`$–manifold, an incompressible, $`\partial `$–incompressible surface $`S`$ without accidental parabolics which is not the fiber of a fibration over $`S^1`$ is quasifuchsian, and therefore corresponds to a unique point in $`Teich(S)\times Teich(\overline{S})`$. For surfaces $`S`$ of low genus, the tautological curve over the Teichmüller space of $`S`$ has certain symmetries which restrict to a symmetry of each fiber — that is, of each Riemann surface topologically equivalent to $`S`$. There is a corresponding symmetry of the universal curve over $`Teich(\overline{S})`$ and therefore of a quasifuchsian representation of $`S`$. Geometrically, this means that one can cut along a minimal surface representing the class of $`S`$ and reglue it by an automorphism which preserves the geometric structure to give a complete nonsingular hyperbolic structure on a new manifold. Actually, one does not need to know there is an equivariant minimal surface along which one can cut — one can perform the cutting and gluing at the level of limit sets by using Maskit’s combination theorems . This operation is called mutation and is studied extensively by Ruberman, for instance in .

If $`S`$ is a sphere with $`4`$ punctures, we can think of $`S`$ as a Riemann sphere with $`4`$ points deleted. After a Möbius transformation, we can assume $`3`$ of these points are at $`0,1,\infty `$ and the $`4`$th is at the complex number $`z`$. The full symmetry group $`𝖲_4`$ does not act holomorphically on $`S`$ except in very special cases, but the subgroup consisting of $`(12)(34)`$ and its conjugates does act holomorphically. This group is known as the Klein $`4`$–group. Geometrically, where we find an appropriate $`4`$–punctured sphere in a hyperbolic manifold $`M`$, we can cut along the sphere and reglue after permuting the punctures. This operation leaves invariant the geometric structure on those pieces of the manifold that do not meet the sphere. If $`S`$ is a spherical orbifold or cone manifold with $`4`$ equivalent cone points, we can similarly cut and reglue. The sphere along which the mutation is performed is called a Conway sphere. The observation that mutation can be performed for spherical orbifolds is standard: one can always find a finite manifold cover of any hyperbolic orbifold, by Selberg’s lemma. The spherical orbifold lifts to an incompressible surface in this cover, and one can cut and reglue equivariantly in the cover. In fact, local rigidity for small cone angles developed by Hodgson and Kerckhoff in suggests that this can be done just as easily for the cone manifold case, but this is superfluous for our applications.

###### Example 1.2. Let $`M`$ be a manifold with a single cusp $`c`$ which admits a $`\mathbb{Z}/3`$ symmetry that acts on $`c`$ as a rotation. This symmetry forces $`c`$ to be hexagonal. Let $`p`$ be a point in $`M`$ not fixed by $`\mathbb{Z}/3`$, and let $`M^{}`$ be obtained by equivariantly removing $`3`$ balls centered at the orbit of $`p`$. Glue another manifold with $`\mathbb{Z}/3`$ symmetry to $`M^{}`$ along its spherical boundaries $`S_1,S_2,S_3`$ so that the symmetries on either side are compatible, to make $`M^{\prime \prime }`$. Let $`K`$ be a $`\mathbb{Z}/3`$-invariant knot in $`M^{\prime \prime }`$ which intersects each of the spheres $`S_i`$ in $`4`$ points and which is sufficiently complicated that its complement in $`M^{\prime \prime }`$ is atoroidal and the $`4`$–punctured spheres $`S_i`$ are incompressible and $`\partial `$–incompressible.
Perform a mutation on $`S_1`$ which destroys the $`\mathbb{Z}/3`$–symmetry to get a manifold $`N`$ with two cusps which we refer to as $`c`$ and $`d`$, where $`d`$ corresponds to the core of $`K`$. Then $`c`$ is hexagonal, since mutation does not affect the geometric structure away from the splitting surface. Moreover, for large integers $`r`$, any generalized $`(r,0)`$ surgery on $`d`$ will preserve the fact that the $`S_i`$ are incompressible, $`\partial `$–incompressible spheres with $`4`$ cone points of order $`r`$, and therefore we can undo the mutation on $`S_1`$ to see that these surgeries do not affect the hexagonal structure on $`c`$. But if a real $`1`$–parameter family of surgeries on $`d`$ does not affect the structure on $`c`$, then every surgery on $`d`$ keeps the structure on $`c`$ fixed, so $`c`$ is isolated from $`d`$. Note that for general $`(p,q)`$ surgeries on $`d`$, the spheres $`S_i`$ will be destroyed and the surgered manifold will not be mutation–equivalent to a $`\mathbb{Z}/3`$ symmetric manifold. Moreover, for sufficiently generic $`K`$, there will be no rigid triangle orbifolds in $`N`$ separating $`c`$ from $`d`$. As far as we know, this is the first example of geometric isolation to be constructed that is not forced by a rigid separating or boundary parallel surface.

We note in passing that mutation followed by surgery often preserves other analytic invariants of hyperbolic cusped manifolds, such as volume and (up to a constant) Chern–Simons invariants. Ruberman observed in that if $`K`$ and a Conway sphere $`S`$ are unlinked, then a mutation which preserves this unlinking can actually be achieved by mutation along the genus $`2`$ surface obtained by tubing together the sphere $`S`$ with a tubular neighborhood of $`K`$. This mutation corresponds to the hyperelliptic involution of a genus $`2`$ surface. Since this surface is present and incompressible for all but finitely many surgeries on $`K`$ and its mutant, the surgered mutants are mutants themselves. If $`S`$ and $`K`$ are not unlinked, no such hyperelliptic surface can be found, and in fact in this case there is no clear relationship between invariants of the manifold obtained after surgeries on the original knot and on the mutant. Perhaps this makes the persistence of isolation under mutation more interesting, since it shows that some of the effects of mutation are persistent under surgeries where other effects are destroyed.

## 2. Napoleon’s theorem

### 2.1. Triangulations and hyperbolic surgery

A finite volume complete but not compact hyperbolic $`3`$–manifold $`M`$ can be decomposed into a finite union of (possibly degenerate) ideal tetrahedra glued together along their faces. An ideal tetrahedron determines and is determined by its $`4`$–tuple of endpoints on the sphere at infinity of $`\mathbb{H}^3`$. Identifying this sphere with $`\mathbb{CP}^1`$, we can think of the ideal tetrahedron as a $`4`$–tuple of complex numbers. The isometry type of the tetrahedron is determined by the cross–ratio of these $`4`$ points; equivalently, if we move three of the points to $`0,1,\infty `$ by a hyperbolic isometry, the isometry type of the tetrahedron is determined by the location of the $`4`$th point, that is by a value in $`\mathbb{C}\setminus \{0,1\}`$. This value is referred to as the simplex parameter of the ideal tetrahedron.
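The simplex parameter, and the cross–ratio invariance that underlies mutation along a Conway sphere, are easy to check numerically. The following sketch is our own illustration, not part of the paper; all the names in it are ours. It normalizes three ideal vertices to $`0,1,\infty `$ and reads off the parameter of the fourth, then verifies that the cross–ratio is unchanged by the Klein $`4`$–group of double transpositions:

```python
def to_zero_one_infinity(p, q, r):
    """The Mobius map sending p -> 0, q -> 1, r -> infinity."""
    return lambda w: ((w - p) * (q - r)) / ((w - r) * (q - p))

# simplex parameter of an ideal tetrahedron with (finite) vertices p, q, r, s:
p, q, r, s = 0.0, 1.0, 2.0 + 1.0j, 1.0j
z = to_zero_one_infinity(p, q, r)(s)
print(z)   # a value in C minus {0, 1}; here in the upper half-plane

# the cross-ratio is invariant under the Klein 4-group of double
# transpositions, which is what makes regluing a 4-punctured sphere
# after such a permutation (mutation) geometrically possible
def cross_ratio(a, b, c, d):
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

a, b, c, d = 0.3 + 0.1j, 2.0 - 1.0j, -1.5j, 4.0 + 4.0j
for perm in [(a, b, c, d), (b, a, d, c), (c, d, a, b), (d, c, b, a)]:
    print(cross_ratio(*perm))   # the same value four times
```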
A combinatorial complex $`\mathrm{\Sigma }`$ of simplices glued together can be realized geometrically as an ideal triangulation of a finite volume hyperbolic manifold (after removing the vertices of $`\mathrm{\Sigma }`$) if certain equations in the simplex parameters are satisfied. These equations can be given explicitly by examining the links of vertices of the triangulation. For a finite volume non–compact hyperbolic manifold, all the links of vertices of $`\mathrm{\Sigma }`$ should be tori $`T_j`$. There are induced triangulations $`\tau _j`$ of these tori by the small triangles obtained by cutting off the tips of the tetrahedra in $`\mathrm{\Sigma }`$. Let $`\mathrm{\Sigma }=\bigcup _i\mathrm{\Delta }_i`$ and let $`z_i`$ be a hypothetical assignment of simplex parameters to the tetrahedra $`\mathrm{\Delta }_i`$. Then the horoball sections of a hypothetical hyperbolic structure on $`\mathrm{\Sigma }^{\prime }=\mathrm{\Sigma }\setminus \text{vertices}`$ are Euclidean tori triangulated by Euclidean triangles, which are the horoball sections of the $`\mathrm{\Delta }_i`$. If the simplex parameter of an ideal tetrahedron is $`z`$, the Euclidean triangles obtained as sections of the horoballs centered at its vertices have the similarity type of the triangle in $`\mathbb{C}`$ with vertices at $`0,1,z`$.

A path in the dual $`1`$–skeleton to $`\tau _j`$ determines a developing map of $`\tau _j`$ in $`\mathbb{C}`$: given a choice of the initial triangle, there is a unique choice of each subsequent triangle within its fixed Euclidean similarity type such that the combinatorial edge it shares with its predecessor is made to geometrically agree with it. The holonomy of a closed path in this dual skeleton is the Euclidean similarity taking the initial triangle to the final triangle. There are two necessary conditions to be met in order for the hyperbolic structures on the $`\mathrm{\Delta }_i`$ to glue up to give a hyperbolic structure on $`\mathrm{\Sigma }^{\prime }`$. These conditions are actually sufficient in a small neighborhood of the complete structure as described in and made rigorous in .

* The edge equations: the holonomy around a vertex of $`\tau _j`$ should be trivial.
* The cusp equations: the holonomy around the meridians and longitudes of the $`T_j`$ should be translations.

These “equations” can be restated as identities of the form

$$\sum _ic_{ij}\mathrm{ln}(z_i)+d_{ij}\mathrm{ln}(1-z_i)=\pi ie_j$$

for some collection of integers $`c_{ij},d_{ij},e_j`$ and some appropriate choices of branches of the logarithms of the $`z_i`$. A $`(p,q)`$ hyperbolic Dehn surgery translates in this context to replacing the cusp equations for some cusp by the condition that the holonomy around the meridian and longitude give Euclidean similarities $`h_m,h_l`$ such that $`h_m^ph_l^q=\text{id}`$. The analytic coordinates on Dehn surgery space determined by the analytic parameters $`z_i`$ are holomorphically related to the $`u`$ coordinates alluded to earlier. The geometry of a complete cusp is determined by the translations corresponding to the holonomy of the meridian and longitude. This is all described in great detail in and .

### 2.2. Tessellations with forced symmetry

In this section, we produce examples of isolation phenomena which do not come from rigidity or mutation, but rather from the following theorem, known as “Napoleon’s Theorem” ():

###### Theorem 2.1 (“Napoleon’s Theorem”).

Let $`T`$ be a triangle in $`\mathbb{R}^2`$. Let $`E_1,E_2,E_3`$ be three equilateral triangles constructed on the sides of $`T`$.
Then the centers of the $`E_i`$ form an equilateral triangle.

Proof: Let the vertices of $`T`$ be $`0,1,z`$. The three centers of the $`E_i`$ are of the form $`a_iz+b_i`$ for certain complex numbers $`a_i,b_i`$ so the shape of the resulting triangle is a holomorphic function of $`z`$. Let $`\omega =(1+\frac{i}{\sqrt{3}})/2`$. If $`z`$ is real and between $`0`$ and $`1`$ then the center of $`E_1`$ splits the line between $`0`$ and $`\omega `$ in the ratio $`z:1-z`$, the center of $`E_2`$ splits the line between $`\omega `$ and $`1`$ in the ratio $`z:1-z`$, and the center of $`E_3`$ is at $`1-\omega `$. But a clockwise rotation through $`\pi /3`$ about $`1-\omega `$ takes the line between $`0`$ and $`\omega `$ to the line between $`\omega `$ and $`1`$; i.e. it takes the center of $`E_1`$ to the center of $`E_2`$. Thus the theorem is true for real $`z`$ and by holomorphicity, it is true for all $`z`$.

Napoleon’s theorem gives rise to an interesting phenomenon in plane geometry: fix a triangle $`T`$. Then three triangles isometric to $`T`$ and three equilateral triangles with side length equal to the sides of $`T`$ can be glued together to make a hexagon which tiles the plane with symmetry group $`𝖲(3,3,3)`$ as in figure 1. The edge lengths of the three equilateral triangles are equal to the three edge lengths of $`T`$, and around each vertex the angles are the three interior angles of $`T`$ together with three angles equal to $`\frac{2\pi }{6}`$. It follows that the hexagon in question exists and tiles the plane; to see that it has the purported symmetry group, observe that the combinatorics of the triangulation have a $`3`$–fold symmetry about the centers of the three equilateral triangles bounding some fixed triangle of type $`T`$. These three centers are the vertices of an equilateral triangle, by Napoleon’s theorem; it follows that the symmetry group of the tiling contains the group generated by three rotations of order $`3`$ with centers at the vertices of an equilateral triangle — that is to say, the symmetry group $`𝖲(3,3,3)`$.

If we imagine that these triangles are the asymptotic horoball sections of ideal tetrahedra going out a cusp of a hyperbolic manifold, we see that appropriate deformations of the tetrahedral parameters change the triangulation but not the geometry of the cusp. For $`T`$ a horoball section of an ideal tetrahedron with simplex parameter $`z`$, the holonomy around vertices is just

$$z\cdot \omega \cdot \frac{z-1}{z}\cdot \omega \cdot \frac{1}{1-z}\cdot \omega =1$$

where $`\omega =\frac{1+i\sqrt{3}}{2}`$ is the similarity type of an equilateral triangle, and the holonomy around the meridian and longitude are both translations by complex numbers $`z_1,z_2`$ whose ratio is $`\frac{1+i\sqrt{3}}{2}`$.

###### Example 2.1.

A scheme to parlay this theorem into isolation phenomena is given by the following setup: we have two hexagons $`H_1,H_2`$ each tiled by six equilateral triangles. We divide these twelve triangles into two similarity types, corresponding to the triangles in $`\mathbb{C}`$ with vertices at $`\{0,1,z\}`$ and $`\{0,1,w\}`$, and we choose a preferred vertex for each triangle corresponding to $`0`$, in the manner indicated in figure 2. For an arbitrary choice of complex numbers $`z`$ and $`w`$, the hexagons can be realized geometrically to give affine structures on the tori obtained by gluing opposite sides of $`H_1`$ together and similarly for $`H_2`$. Initially set $`z=w=\frac{1+i\sqrt{3}}{2}`$.
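As an aside before continuing the example, both Napoleon’s theorem and the vertex holonomy identity just quoted are easy to verify numerically. The following is our own minimal sketch; the outward-apex construction assumes the triangle’s vertices are listed counterclockwise:

```python
import cmath

def outward_center(p, q):
    """Center of the equilateral triangle erected on the directed edge p -> q,
    on its right-hand side (outward when the triangle is counterclockwise)."""
    apex = p + (q - p) * cmath.exp(-1j * cmath.pi / 3)
    return (p + q + apex) / 3

a, b, c = 0, 1, 0.4 + 1.3j   # an arbitrary counterclockwise triangle
g = [outward_center(a, b), outward_center(b, c), outward_center(c, a)]
print(abs(g[0] - g[1]), abs(g[1] - g[2]), abs(g[2] - g[0]))   # three equal lengths

# the vertex holonomy identity quoted above, for an arbitrary parameter z:
z, w = 0.3 + 0.8j, (1 + 1j * 3 ** 0.5) / 2
print(z * w * ((z - 1) / z) * w * (1 / (1 - z)) * w)   # = 1 for every z
```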
Deforming $`z`$ but keeping $`w`$ fixed changes the affine structure on the first torus but leaves the structure on the second torus unchanged. For the combinatorial triangulation of the second torus is exactly the triangulation of a fundamental domain of the tessellation in figure 1. It follows that for $`w=\frac{1+i\sqrt{3}}{2}`$ and $`z`$ arbitrary, the universal cover of the second torus is tiled by a tessellation with symmetry group $`𝖲(3,3,3)`$ and the torus is therefore hexagonal. Similarly, deforming $`w`$ and keeping $`z=\frac{1+i\sqrt{3}}{2}`$ changes the affine structure on the second torus but leaves the structure on the first torus unchanged, since now we can identify the combinatorial triangulation of the first torus with the triangulation of a fundamental domain of the tessellation in figure 1.

This configuration of ideal tetrahedra with horoball sections equal to the two cusps in this figure can be realized geometrically by arranging six regular ideal tetrahedra in the upper half–space in a hexagonal pattern with the common edge of the tetrahedra going from $`0`$ to $`\infty `$. The pattern seen from infinity looking down is $`H_1`$, and the pattern seen from $`0`$ looking up is $`H_2`$. The pictures are aligned so that the real line has its usual orientation. Glue the twelve free faces of the tetrahedra in such a way as to make $`H_1`$ and $`H_2`$ torus cusps. This gives an orbifold $`N`$ whose underlying manifold is $`T^2\times I`$, with orbifold locus three arcs of cone angle $`2\pi /3`$ each running between two $`(3,3,3)`$ triangle cusps arranged in the obvious symmetrical manner. Unfortunately, under surgeries of the cusps of $`N`$, the simplices do not deform in the manner required by Napoleon’s theorem. However, if we pass to a $`3`$–fold cover, this problem can be corrected.

Let $`L`$ be the link depicted in figure $`3`$, and let $`M=S^3\setminus L`$. Then $`M`$ admits a complete hyperbolic structure which can be decomposed into $`18`$ regular ideal tetrahedra. It follows that $`M`$ is commensurable with the figure $`8`$ knot complement. In fact, $`M`$ is the $`3`$–fold cover of $`N`$ promised above. Geometrically, arrange $`18`$ regular ideal tetrahedra in the upper half–space with a common vertex at infinity so that a horoball section intersects the collection in the pattern depicted in figure $`4`$. If we glue the $`12`$ external vertical sides of this collection of tetrahedra in the indicated manner, it gives a horoball section of one of the cusps of $`M`$. It remains to glue up the $`18`$ faces of the ideal tetrahedra with all vertices on $`\mathbb{C}`$. Figure $`4`$ has an obvious decomposition into $`3`$ regular hexagons, each composed of $`6`$ equilateral triangles. The $`3`$ edges of the complex associated to the centers of these three hexagons have a common endpoint at $`\infty `$, and intersect $`\mathbb{C}`$ at three points $`p_1,p_2,p_3`$. Six triangles come together at each of the points $`p_i`$, and a horoball centered at each of these points intersects $`6`$ tetrahedra in a hexagonal pattern. Glue opposite faces of these hexagons to produce $`3`$ cusps centered at $`p_1,p_2,p_3`$. The result of all this gluing produces the manifold $`M`$. By construction, this particular choice of triangulation produces $`6`$ hexagonal cusps, $`3`$ made up of $`6`$ equilateral triangles and $`3`$ made up of $`18`$ equilateral triangles.
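The commensurability claim is consistent with a quick volume computation. A rough numeric sketch of ours, assuming only the standard facts that a regular ideal tetrahedron has volume $`3\mathrm{\Lambda }(\pi /3)`$, where $`\mathrm{\Lambda }`$ is the Lobachevsky function, and that the figure $`8`$ knot complement decomposes into two such tetrahedra:

```python
from math import sin, log, pi

def lob(theta, n=100000):
    """Lobachevsky function L(theta) = -int_0^theta log|2 sin t| dt (midpoint rule)."""
    h = theta / n
    return -sum(log(abs(2.0 * sin((k + 0.5) * h))) * h for k in range(n))

v_reg = 3 * lob(pi / 3)   # volume of a regular ideal tetrahedron, ~1.0149
print(18 * v_reg)         # vol(M) ~ 18.27, nine times ...
print(2 * v_reg)          # ... the figure-8 complement volume, ~2.0299
```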
The components of $`L`$ fall into two sets of $`3`$ links, depicted in the figure as the darker and the lighter links, which we denote by $`c_1,c_2,c_3`$ and $`d_1,d_2,d_3`$ respectively. The group of symmetries of $`M`$ permutes the cusps by the group $`(𝖲_3\times 𝖲_3)\rtimes \mathbb{Z}/2`$ where the conjugation action of the generator of $`\mathbb{Z}/2`$ takes $`(\sigma _i,\sigma _j)`$ to $`(\sigma _j,\sigma _i)`$. If $`G`$ is the entire group of symmetries of $`M`$, then there is a short exact sequence

$$0\to \mathbb{Z}/3\to G\to (𝖲_3\times 𝖲_3)\rtimes \mathbb{Z}/2\to 0$$

so the order of $`G`$ is $`216`$. These links have the property that a surgery on $`c_i`$ keeps $`c_{i+1}`$ and $`c_{i+2}`$ hexagonal, but distorts the structure at the $`d_j`$, and vice versa. On the other hand, a surgery on both $`c_i`$ and $`c_{i+1}`$ does distort the structure at $`c_{i+2}`$. In terms of the picture already described, the configuration of $`18`$ triangles in figure 4 decomposes into 3 hexagons of 6 triangles. These three hexagons glue up in the obvious way to give hexagonal triangulations of the $`c_i`$. Under surgery on $`c_i`$, the tetrahedra intersecting $`c_i`$ deform to satisfy the new modified cusp equations. Under this deformation, the other triangle types must deform to keep the other cusps complete. It can be easily checked that the triangulations of the cusps $`c_{i+1},c_{i+2}`$ lift to the symmetric tessellations of $`\mathbb{R}^2`$ depicted in figure 1, and therefore the similarity types of these cusps stay hexagonal. However, under surgery on both $`c_i`$ and $`c_{i+1}`$, the similarity types of triangles making up cusp $`c_{i+2}`$ are not related in any immediately apparent way to the picture in figure 1. The proof that $`c_2`$ and $`c_3`$ are isolated from $`c_1`$ is essentially just a calculation that under surgery on $`c_1`$ say, the simplex parameters solving the relevant edge and cusp equations for the combinatorial triangulation vary as indicated in figure 4. One may check experimentally that the similarity type of $`c_3`$ is not constant under fillings on both $`c_1`$ and $`c_2`$, using, for example, Jeff Weeks’ program snappea, available from , for finding hyperbolic structures on $`3`$–manifolds, or Andrew Casson’s provably accurate program cusp ().

###### Definition 2.1.

Say that $`3`$ cusps are in Brunnian isolation when a surgery on one of them leaves invariant the structure at the other $`2`$, but a surgery on two of the cusps can change the structure of the third.

With this definition, we observe that Napoleon’s $`3`$–manifold has two sets of cusps in Brunnian isolation. One can see that there is an automorphism of $`S^3`$ of order $`2`$ fixing two components of $`L`$ and permuting the other $`4`$ components in pairs. The quotient by this automorphism is an orbifold $`N`$ which has two regular cusps and two pillow cusps. We call the pillow cusps $`c_p,d_p`$, and the regular cusps $`c_r,d_r`$ where $`c_p`$ is a quotient of $`c_1`$, $`c_r`$ is covered by $`c_2\cup c_3`$, and similarly for the $`d_i`$. The cusp $`c_p`$ is first-order isolated but not isolated from $`c_r`$. Similarly, $`d_p`$ is first-order isolated but not isolated from $`d_r`$. This can be easily observed by noting that $`c_r`$ is isolated from $`c_p`$, and therefore it is first-order isolated from it (by the corresponding properties in the cover). It follows that $`c_p`$ is first-order isolated from $`c_r`$, by . To see that $`c_p`$ is not isolated from $`c_r`$, it suffices to pass to the cover and perform an equivariant surgery there.
Alternatively, one can easily check by hand using snappea that the geometry of the cusp $`c_p`$ changes when one performs surgery on $`c_r`$.

###### Example 2.2.

The $`2`$–cusped orbifold $`A`$ first described in and studied in displays Napoleonic tendencies, where the version of Napoleon’s theorem we use now concerns right triangles. It is obtained by $`(2,0)`$ surgery on the light cusps of the link complement portrayed in figure 5. Coincidentally, it is $`2`$–fold covered by that very link complement. Let $`T`$ be the right triangle with side lengths $`\{1,1,\sqrt{2}\}`$, and let $`S`$ be the unit square in $`\mathbb{C}`$. Pick a point $`p\in S`$ and construct four triangles $`T_i`$ all similar to $`T`$ with one vertex of the diagonal at $`p`$ and the other vertex of the diagonal at a vertex of $`S`$, such that the triangle is clockwise of the diagonal, seen from $`p`$. Then the $`8`$ vertices of these triangles away from $`p`$ are the vertices of an octagon which tiles the plane with quotient space the square torus. This corresponds exactly to the triangulation of a horoball section of a complete cusp in the orbifold $`A`$ after a deformation of the other cusp.

One observes that the link complement in figure 3 is obtained from the link complement in figure 5 by drilling out two curves. Are the isolation phenomena associated with the two links related?
no-problem/9909/astro-ph9909396.html
ar5iv
text
# Supernova 1987A: A Young Supernova Remnant in an Aspherical Progenitor Wind

## 1. Introduction

The circumstellar material around SN 1987A shows striking deviations from spherical symmetry, in particular in the form of the “three-ring circus” spectacularly imaged by HST (Burrows et al. 1995). This nebulosity shows a distinct bipolar structure, resembling many of the planetary nebulae shown at this meeting. In the case of SN 1987A, supernova ejecta are rapidly propagating outwards from the center of this structure, producing radio, optical and X-ray emission as they collide with surrounding material. Observations of SN 1987A are thus an excellent probe of the mass-loss history of a supernova progenitor.

Many authors have considered the nature of the triple-ring system surrounding SN 1987A. For the purposes of interpreting the interaction between the supernova ejecta and this material, we adopt the “standard model” (Blondin & Lundqvist 1993; Martin & Arnett 1995), namely that:

* the progenitor star was a red supergiant (RSG) until $`\sim `$20 000 yr ago, during which time it produced a slowly moving, dense wind;
* the star then evolved into a blue supergiant (BSG), producing a fast moving and low density wind;
* the rings correspond to the bipolar interface produced by the interaction between the RSG and BSG winds;
* the RSG wind was densest in the equatorial plane (perhaps produced by rotation and/or binarity in the progenitor), while the BSG wind was isotropic.

We note that this model certainly has its problems, and that many alternatives have been proposed (e.g. Soker 1999).

## 2. Radio Flux Monitoring

Radio emission was detected from SN 1987A just 2 days after the supernova explosion (Turtle et al. 1987). This emission peaked on day 4, before following a power law decay to become undetectable by day 150 (Ball et al. 1995). This radio outburst has been interpreted as synchrotron emission produced as the supernova shock passed through the innermost regions of the BSG wind (Storey & Manchester 1987). After $`>`$3 years of radio silence, radio emission was re-detected from SN 1987A in mid-1990 (Staveley-Smith et al. 1992). Since then, emission has shown a monotonic increase with a spectral index $`\alpha \approx 1`$ ($`S_\nu \propto \nu ^{\alpha }`$) (Gaensler et al. 1997). X-ray emission from the system turned on at around the same time, and has since also steadily increased (Hasinger, Aschenbach & Trümper 1996). This behavior suggests that the shock, having expanded freely through the BSG wind, has now run into a density jump.

## 3. Radio Imaging

Observations of SN 1987A with the Australia Telescope Compact Array (ATCA) at 9 GHz can resolve the radio emission from SN 1987A. By fitting a thin spherical shell to the radio emission at each epoch, the expansion of the source with time can be quantified. These data, shown in Figure 1, show a linear expansion rate of $`\sim `$3500 km s<sup>-1</sup> from day 1500 onwards. Interpolating between day 0 and day 1500 implies an initial expansion rate $`>`$35 000 km s<sup>-1</sup>, consistent with VLBI and H$`\alpha `$ measurements made shortly after the explosion. These data thus indicate that the supernova shock experienced a rapid deceleration at or just before radio and X-ray emission were redetected in mid-1990.

The diffraction-limited resolution of the ATCA is $`0.^{\prime \prime }9`$, but using super-resolution we can produce a sequence of radio images of SN 1987A with a slightly higher resolution of $`0.^{\prime \prime }5`$ (see Gaensler et al. 1997).
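In outline, the quoted expansion rate amounts to a straight-line fit of shell radius against day, converted to a velocity with an assumed distance. A minimal sketch of ours, using placeholder (day, radius) pairs (illustrative only, not the ATCA measurements) and an assumed LMC distance of 50 kpc:

```python
# placeholder (day, radius in arcsec) pairs for illustration -- not the ATCA data
pts = [(1800, 0.580), (2500, 0.608), (3200, 0.637), (3900, 0.665)]

n = len(pts)
sx = sum(d for d, _ in pts); sy = sum(r for _, r in pts)
sxx = sum(d * d for d, _ in pts); sxy = sum(d * r for d, r in pts)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # arcsec per day

D_km = 50e3 * 3.086e13                  # assumed LMC distance: 50 kpc in km
v = slope / 206265.0 * D_km / 86400.0   # arcsec/day -> rad/day -> km/s
print(v)                                # ~3.5e3 km/s for these numbers
```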
These super-resolved images (see Figure 2) show the emission to have a shell-like structure; the morphology is dominated by two lobes to the east and west. An overlay between the optical ring system and the radio data (Reynolds et al. 1995; Gaensler et al. 1997) shows the radio shell to be centered on the position of the supernova, but with a radius only $`\sim `$90% of that of the optical ring. Thus although the supernova shock appears to have run into a density jump, this jump must be located within the interface between the RSG and BSG winds. The radio/optical overlay also shows that the radio lobes align with the major axis of the optical rings, when projected onto the sky. Gaensler et al. (1997) interpret this as indicating that radio emission is confined to the equatorial plane of the progenitor system, a result recently confirmed by STIS data (Michael et al. 1998).

## 4. Interpretation

The abrupt radio and X-ray turn-on in mid-1990, as well as the rapid deceleration of the shock at around the same time, can be explained in terms of the “standard” interacting winds model, with the addition of a dense H ii region just inside the bipolar interface (Chevalier & Dwarkadas 1995). This region, produced by UV photons from the BSG ionizing the surrounding RSG wind, can, at least to first order, account for the observed light curves, expansion rate and X-ray emission measure. The double-lobed morphology is then interpreted as an axisymmetry in this surrounding material, as discussed in detail by Gaensler et al. (1997).

## 5. The Future

Radio monitoring and imaging of SN 1987A will certainly continue; in 2001 the ATCA will be upgraded to a maximum frequency of 100 GHz, giving a significant improvement in spatial resolution. Meanwhile, Chandra and XMM will soon spatially and spectrally resolve the X-ray emission from SN 1987A, giving us a wealth of information about the conditions at the shock. All this is just a prelude to the collision of the supernova ejecta with the inner optical ring, expected in around 2004. At this point SN 1987A will drastically evolve and brighten (perhaps by a factor of $`10^3`$ in every waveband!), providing us with much new information about the progenitor’s circumstellar material. Into the next century and beyond, we can expect that SN 1987A will evolve into a “classical” supernova remnant (SNR), at which point we can perhaps start to relate the complex morphologies of SNRs to the mass-loss histories of their progenitors.

### Acknowledgments.

The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. B.M.G. acknowledges the support of NASA through Hubble Fellowship grant HF-01107.01-98A awarded by the Space Telescope Science Institute.

## References

Ball, L., Campbell-Wilson, D., Crawford, D. F., & Turtle, A. J. 1995, ApJ, 453, 864
Blondin, J. M., & Lundqvist, P. 1993, ApJ, 405, 337
Burrows, C. J. et al. 1995, ApJ, 452, 680
Chevalier, R. A., & Dwarkadas, V. V. 1995, ApJ, 452, L45
Gaensler, B. M. et al. 1997, ApJ, 479, 845
Hasinger, G., Aschenbach, B., & Trümper, J. 1996, A&A, 312, L9
Martin, C. L., & Arnett, D. 1995, ApJ, 447, 378
Michael, E. et al. 1998, ApJ, 509, L117
Reynolds, J. E. et al. 1995, A&A, 304, 116
Soker, N. 1999, MNRAS, 303, 611
Staveley-Smith, L. et al. 1992, Nature, 355, 147
Storey, M. C., & Manchester, R. N. 1987, Nature, 329, 421
Turtle, A. J. et al. 1987, Nature, 327, 38
no-problem/9909/astro-ph9909001.html
ar5iv
text
# HST - NICMOS Color Transformations and Photometric Calibrations

Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by AURA for NASA under contract NAS5-26555.

## 1 Introduction

The transformation of photometric observations from one filter system to another is rarely a trivial task. This is especially true for observations of cool giants at near-IR wavelengths because of the presence of very deep molecular absorption bands both in the stars themselves (e.g. CO and H<sub>2</sub>O) and in the earth’s atmosphere (e.g. CO<sub>2</sub> and H<sub>2</sub>O). Examples of such transformations may be found in Bessell & Brett (1988) and Elias et al. (1983). Because of these molecular bands the transformation equations can be dependent not only on stellar color, but also on absolute luminosity and stellar metallicity.

For the Hubble Space Telescope (HST) Near Infrared Camera and Multi-Object Spectrograph (NICMOS) filters on camera 2 (NIC2; MacKenty et al. (1997)) that have been used as pseudo-$`JHK`$ filters, the transformation to a standard ground-based system presents a particularly difficult case. This is because the commonly used NIC2 filters (F110W for $`J`$, F160W for $`H`$, and F222M for $`K`$) differ significantly from the commonly used ground-based filters (Bessell & Brett (1988)); also there are no telluric absorption features in HST observations. Nonetheless, many scientific programs, including our own, will require accurate (uncertainties no worse than a few percent) knowledge of these transformations.

In order to derive accurate color transformations, we devoted two orbits of one of our observing programs (GO-7826) to gathering observations of cool, metal-rich giants that have extensive ground-based observations on the CIT/CTIO system. These giants are all in the Baade’s Window field of the Galactic Bulge. The brighter ones and a few of the fainter ones had previously been observed with the same single-channel photometric system that was used to establish the CIT/CTIO grid of southern standards (Elias et al. (1982); Frogel & Whitford (1987)). The remainder of the fainter stars had previously been observed with a NICMOS array by Tiede et al. (1995). This paper presents our HST-NICMOS observations of these previously observed bulge giants, which span the color range $`0.7<(J-K)<1.6`$. Section two outlines our observations; the reduction procedures are discussed in section three. A comparison with ground-based measurements and the color transformations are in section four. The final section is a brief summary.

## 2 Observations

The targets were originally observed on 1998 August 19 with HST, but problems encountered while traversing the South Atlantic Anomaly caused a loss of tracking. Thanks to the gracious reallotment of time by the Telescope Time Review Board (TTRB), our fields were reobserved on 1998 October 28. Of the first observation attempt, we were only able to confidently salvage the observations of the first target; thus our quoted BW1 measurements are actually the average of measurements made on both visits. Our observations consist of eight pointings in Baade’s Window (henceforth referred to as BW1-BW8), from which we have obtained measurements of 19 stars previously observed from the ground. We will refer to these stars as “standard” stars. The first field (BW1) is a small part of the “BW4b” field observed by Tiede et al. (1995) (see Fig. 1).
This field is fairly rich, and includes 12 stars observed by Tiede et al. (1995). The remaining 7 fields (BW2-BW8) are of single stars observed previously by Frogel & Whitford (1987). The coordinates of each field are given in Table 1.

Our observations were taken with the NICMOS camera 2 (NIC2), which has a plate scale of $`0.^{\prime \prime }0757`$ pixel<sup>-1</sup>, giving NIC2 a field of view of $`19.^{\prime \prime }4`$ on a side (376 arcsec<sup>2</sup>). The NICMOS focus was set at the compromise position 1-2, which optimizes the focus for simultaneous observations with cameras 1 and 2. Each field was observed in three filters: F110W ($`J`$), F160W ($`H`$), and F222M ($`K`$), using a spiral dither pattern with 4 positions (see section 4 for a discussion of the filters). We used $`0.^{\prime \prime }4`$ steps on BW1 to maximize the size of the overlapping field, and $`5.^{\prime \prime }0`$ steps on BW2-BW8 to minimize the effects of residual images from the bright stars. All of our observations used the multiaccum mode (MacKenty et al. (1997)) because of its optimization of the detector’s dynamic range and cosmic ray rejection. The BW1 field used the predefined sample sequences step2, step8 and step16 with 12, 11, and 14 samples in $`J`$, $`H`$, and $`K`$ respectively, yielding exposure times of 18, 48, and 128 seconds. Fields BW2-BW8 implemented the multiaccum sample sequence scamrr, designed for fast temporal sampling with a single camera, which performs a nondestructive read every 0.203 seconds, yielding typical exposure times of 0.41 seconds for each of the four dither positions. Table 1 lists the total exposure times in each filter for all targets. Cosmic ray (CR) rejection using the multiaccum mode requires many intermediate reads. Since most of our targets are bright stars, our required exposure times were very short, with only two to four intermediate reads during each exposure, an inadequate number for effective CR rejection. We therefore rely on the four dither positions for CR and bad-pixel rejection.

## 3 Reduction

Our data were reduced using three different techniques. The first technique was the standard STScI pipeline. Afterwards we performed our own reductions with the NICRED v. 1.8 package dated 01/29/99 (McLeod (1997), Lehar et al. (1999)), and with the STScI pipeline using the IRAF NICPROTO package (May 1999) to eliminate any residual pedestal. All the reductions use flat fields and dark frames provided by STScI, although we generated our own bad pixel masks for the NICRED and NICPROTO reductions. Not too surprisingly, all of these methods give similar photometric results, with no apparent systematic deviations and typical dispersions of $`0.05`$ magnitudes. Although the images produced by NICRED appear cleaner in terms of CRs and bad pixels, all of our photometric measurements have been made off the frames reduced with the STScI pipeline and the IRAF NICPROTO package.

Aperture photometry was performed according to the guidelines set down by STScI `(http://www.stsci.edu/ftp/instrument_news/NICMOS/nicmos_doc_phot.html)` for point-sources observed with NIC2. In our case we used the IRAF PHOT routine with a $`0.^{\prime \prime }5`$ aperture and a $`1^{\prime \prime }`$ sky annulus placed directly outside the aperture, assuming a NIC2 plate scale of $`0.^{\prime \prime }0757`$ pixel<sup>-1</sup>.
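For readers without IRAF at hand, the measurement just described (counts inside a $`0.^{\prime \prime }5`$ aperture, with the sky taken from the surrounding annulus) can be sketched in a few lines. This is our own crude stand-in, not the PHOT implementation:

```python
import math

def aperture_counts(img, x0, y0, r_ap, r_in, r_out):
    """Counts in a circular aperture minus the median sky level of an annulus;
    a crude stand-in for the IRAF PHOT measurement, not its implementation."""
    ap, sky = [], []
    for y, row in enumerate(img):
        for x, val in enumerate(row):
            r = math.hypot(x - x0, y - y0)
            if r <= r_ap:
                ap.append(val)
            elif r_in < r <= r_out:
                sky.append(val)
    sky.sort()
    return sum(ap) - sky[len(sky) // 2] * len(ap)

scale = 0.0757                           # NIC2 arcsec per pixel
r_ap = 0.5 / scale                       # the 0."5 aperture, ~6.6 pixels
r_in, r_out = r_ap, r_ap + 1.0 / scale   # 1" sky annulus directly outside
```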
The measured count rate of each star was multiplied by 1.15 to correct to an infinite aperture, and then converted to flux using the NICMOS photometric keywords released December 1, 1998, obtained from `http://www.stsci.edu/ftp/instrument_news/NICMOS/NICMOS_phot/keywords.html`, and listed in Table 2. Finally the fluxes were converted to magnitudes using the parameters shown in Table 2. (Note that the PHOTZPTs in Table 2 are not those given by STScI, but have been calculated to force the transformation to the CIT/CTIO system to be zero at zero $`(m_{110}-m_{222})`$ color, as can be seen in eqn. 6.) One should also note that there are several methods of converting DN to magnitudes. Two ways are to use either PHOTFLAM & PHOTZPT or PHOTFNU & ZP(VEGA), which are not equivalent and give different magnitudes (because of a zero-point shift). We present our NICMOS magnitudes (i.e. $`m_{110}`$, $`m_{160}`$, $`m_{222}`$) as obtained using PHOTFNU & ZP(VEGA) so that we can directly compare our observations with the STScI standards (see Table 4), but we have calculated the transformations for both methods (eqns. 4-7). Another method, and the easiest of these presented, is to use our transformations to go directly from instrumental magnitudes to CIT/CTIO magnitudes (see eqns. 8-9).

## 4 Comparison with Ground-Based Observations

Our measurements of the faint stars in our BW1 field are compared with the observations of Tiede et al. (1995). Their observations were made on the 2.5m DuPont telescope at Las Campanas Observatory with the IRCAM (Persson et al. (1992)), which used a $`256\times 256`$ HgCdTe NICMOS 3 detector with a plate scale of 0.348 arcseconds pixel<sup>-1</sup>. Their measurements were calibrated from 11 stars measured previously with single-channel photometry (Frogel & Whitford (1987) – the same system used for the ground-based measurements of the stars in BW2-BW8) to the CIT/CTIO system (Elias et al. (1982), 1983). These measurements have reported average errors given in Fig. 3 of Tiede et al. (1995) ($`\sigma _K(16)\approx 0.1`$), and calibration errors of $`\sigma _J=0.02`$, $`\sigma _H=0.03`$, $`\sigma _K=0.02`$ magnitudes.

Our measurements of the single stars in fields BW2-BW8 are compared with the measurements of Frogel & Whitford (1987). These were obtained on the CTIO 4m and the D3 InSb system, and transformed to the CIT/CTIO standard system. Note that their published Table 1 is corrected for reddening and extinction by $`K_0=K-0.14`$, $`(J-K)_0=(J-K)-0.26`$, and $`(H-K)_0=(H-K)-0.09`$, while our values are uncorrected.

In Figure 2 we have plotted the difference between the ground-based measurements and our NICMOS PHOTFNU & ZP(VEGA) calibrated measurements against color for each filter. These plots show that a color term is required to bring the NICMOS data into agreement with ground-based observations. By applying a linear fit to the data, giving stars fainter than $`m_K=10`$ half weight (points with $`(m_{110}-m_{222})<1.6`$), we have calculated the appropriate transformations (eqns. 4-9). As discussed in section 3, STScI supplies two sets of keywords for converting count rates to magnitudes. To eliminate confusion, equations 1-3 explicitly state how the magnitudes used in calculating our transformations were determined.
$$m(\mathrm{RAW})=-2.5\times \mathrm{log}(\mathrm{CR}_{1/2})$$ (1)

$$m(\mathrm{PHOTFLAM})=-2.5\times \mathrm{log}(\mathrm{CR}_{1/2}\times 1.15\times \mathrm{PHOTFLAM})-\mathrm{PHOTZPT}$$ (2)

$$m(\mathrm{PHOTFNU})=-2.5\times \mathrm{log}(\mathrm{CR}_{1/2}\times 1.15\times \mathrm{PHOTFNU}/\mathrm{ZP}(\mathrm{VEGA}))$$ (3)

where $`\mathrm{CR}_{1/2}`$ is the count rate measured in a $`0.^{\prime \prime }5`$ aperture.

Using the PHOTFNU & ZP(VEGA) keywords to calculate $`m_{nicmos}`$:

$$\begin{array}{cc}m_J=m_{110}-(0.198\pm 0.036)(m_{110}-m_{222})-(0.179\pm 0.060)& \\ m_H=m_{160}-(0.177\pm 0.037)(m_{110}-m_{222})+(0.186\pm 0.065)& 1.0<(m_{110}-m_{222})<2.3\\ m_K=m_{222}+(0.074\pm 0.037)(m_{110}-m_{222})-(0.135\pm 0.061)& \end{array}$$ (4)

$$\begin{array}{cc}m_J=m_{110}-(0.344\pm 0.063)(m_{110}-m_{160})-(0.091\pm 0.077)& 0.8<(m_{110}-m_{160})<1.6\\ m_H=m_{160}-(0.305\pm 0.065)(m_{110}-m_{160})+(0.259\pm 0.081)& \end{array}$$ (5)

Using the PHOTFLAM & our PHOTZPT (from Table 2) to calculate $`m_{nicmos}`$:

$$\begin{array}{cc}m_J=m_{110}-(0.198\pm 0.036)(m_{110}-m_{222})\pm 0.058& \\ m_H=m_{160}-(0.177\pm 0.037)(m_{110}-m_{222})\pm 0.063& 0.9<(m_{110}-m_{222})<2.2\\ m_K=m_{222}+(0.074\pm 0.037)(m_{110}-m_{222})\pm 0.059& \end{array}$$ (6)

$$\begin{array}{cc}m_J=m_{110}-(0.344\pm 0.063)(m_{110}-m_{160})-(0.026\pm 0.054)& 0.4<(m_{110}-m_{160})<1.2\\ m_H=m_{160}-(0.305\pm 0.065)(m_{110}-m_{160})-(0.027\pm 0.058)& \end{array}$$ (7)

Going directly from RAW instrumental nicmos magnitudes:

$$\begin{array}{cc}m_J=m_{110}-(0.198\pm 0.036)(m_{110}-m_{222})+(21.754\pm 0.030)& \\ m_H=m_{160}-(0.177\pm 0.037)(m_{110}-m_{222})+(21.450\pm 0.028)& -1.3<(m_{110}-m_{222})<0.0\\ m_K=m_{222}+(0.074\pm 0.037)(m_{110}-m_{222})+(20.115\pm 0.031)& \end{array}$$ (8)

$$\begin{array}{cc}m_J=m_{110}-(0.344\pm 0.063)(m_{110}-m_{160})+(22.054\pm 0.034)& 0.1<(m_{110}-m_{160})<0.9\\ m_H=m_{160}-(0.305\pm 0.065)(m_{110}-m_{160})+(21.715\pm 0.037)& \end{array}$$ (9)

These transformations are required due to the differences between CIT/CTIO and NICMOS filter bandpasses. The NICMOS F110W filter is more than twice as wide as the $`J`$ filter and extends 0.3 µm bluer. The F160W filter is about 35% wider than the $`H`$ filter and extends 0.1 µm bluer, while the F222M filter is about half as wide as the $`K`$ filter. Since NICMOS observations are sensitive to wavelengths at which radiation is completely absorbed by the Earth’s atmosphere, any transformation between NICMOS and ground-based systems will only be valid for stars with similar spectral features. We therefore must emphasize that our transformations are only for late-type stars with molecular absorption bands, especially H<sub>2</sub>O and CO, and would not accurately transform heavily reddened blue stars for example.

STScI has observed six stars with NICMOS in order to establish a transformation between the F110W, F160W, and F222M filters and their ground-based counterparts $`J,H,K`$. Only five of these stars have accurate $`J,H,K`$ ground-based measurements available (Table 4); these are over-plotted in Figure 3. The NICMOS measurements have been calibrated on the ZP(VEGA) system.
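For convenience, the central values of eqns. 4 and 8 can be packaged as functions. The following sketch is ours (no error propagation), and is only meaningful for late-type stars inside the quoted color ranges:

```python
def cit_jhk_vega(m110, m160, m222):
    """Eq. 4: CIT/CTIO J, H, K from ZP(VEGA)-calibrated NIC2 magnitudes,
    for late-type stars with 1.0 < m110 - m222 < 2.3 (central values only)."""
    c = m110 - m222
    return (m110 - 0.198 * c - 0.179,   # J
            m160 - 0.177 * c + 0.186,   # H
            m222 + 0.074 * c - 0.135)   # K

def cit_jhk_raw(m110, m160, m222):
    """Eq. 8: the same, directly from raw instrumental magnitudes,
    for -1.3 < m110 - m222 < 0.0."""
    c = m110 - m222
    return (m110 - 0.198 * c + 21.754,
            m160 - 0.177 * c + 21.450,
            m222 + 0.074 * c + 20.115)
```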
The one STScI star which lies in our color range is in excellent agreement with our observations, and the other four allow us to estimate an extension of our transformation beyond the color limits of our sample. The one potential problem with considering the STScI stars outside our color range is that they are not all late-type stars with significant molecular absorption bands. The two stars blue-ward of our sample are both G stars, and except for the $`H`$ band, are consistent with our derived transformation. The two stars red-ward of our sample consist of a heavily reddened B star and an M3 star, which seem to indicate that a change in slope occurs around $`(m_{110}-m_{222})\approx 2.5`$, at which point the transformation slopes drop to zero, maintaining a constant offset. These offsets are approximately -0.68, -0.19, and -0.01 for $`J`$, $`H`$, and $`K`$ respectively.

Low-resolution infrared spectra of six late-type stars with colors in the range $`0.6<(J-K)<1.8`$ and $`\alpha `$ CMa (A1 V) were taken above the atmosphere with a balloon-borne telescope (Woolf et al. (1964)). Convolving these spectra with the band-passes of the HST filters and instrument throughput, we simulate NICMOS photometric observations. Simulated ground-based photometry was produced by convolving the same spectra with $`JHK`$ filter band-passes and a transmission profile of the Earth’s atmosphere. This analysis yields slopes similar to the ones we derived above using actual photometric observations (Figure 3), in that the F110W and F160W filters give magnitudes too faint for redder stars, while the F222M filter gives magnitudes too bright for redder stars.

In Figure 3 the late-type stars are represented by filled circles and are the only points used for the linear least-squares fit. It is important to note that slopes obtained from the simulation are close to those of our transformations given in eqn. 4 (the intercepts are merely zero-point shifts and are irrelevant in this analysis). The main uncertainty comes from the need to extrapolate the Woolf et al. (1964) spectra, which only go down to $`0.96\mathrm{\mu m}`$, to the lower transmission limit of the NICMOS F110W filter at $`0.76\mathrm{\mu m}`$.

In order to simulate the heavily reddened B star in the STScI standards, we have artificially reddened the A star $`\alpha `$ CMa with $`A_J`$ of 3.5 and 4.0, using the NIR extinction law from Mathis (1990) of $`A_\lambda =A_J(\frac{\lambda }{1.25\mu m})^{-1.70}`$. These are the two open circles which lie at $`(m_{110}-m_{222})`$ of $`\sim `$2.7 and 3.1. This A star seems to confirm that the transformation is valid much bluer than our sample for earlier type stars, while the reddened A star also shows the same change in slope which was indicated by the STScI sample. Thus the reddened A star validates our technique of simulating photometry, in that we can reproduce what was observed not only for the late-type stars but for the STScI reddened B star as well.

## 5 Summary

Our observations reveal that a color correction is required to transform HST NICMOS measurements to the CIT/CTIO photometric system. The photometric keywords are listed in Table 2, and the transformations are given in section 4. The validity of the transformation is confirmed with photometry simulated by convolution of filter transmission curves with late-type stellar spectra taken above the atmosphere. The transformations we have derived are directly applicable ONLY to NIC2 because of differences in detectors, filters, and optics.
These transformations may serve as a guide for observations with NIC1 and NIC3, but for precise photometry with these two cameras, a procedure similar to what we have done would need to be carried out. These transformations are also only valid for late-type stars with molecular absorption bands, because NICMOS is sensitive to wavelengths which never reach ground-based observatories.

We would again like to thank the TTRB for granting us time to reacquire the observations lost to the South Atlantic Anomaly. Support for this work was provided by NASA through grant number GO-7826 from the Space Telescope Science Institute. Daniela Calzetti, Antonella Nota, and Alfred Schultz at STScI provided help in understanding our data. We would also like to thank Glenn Tiede for supplying us with the data for the BW4b field, Brian McLeod for his help in using NICRED, Marcia Rieke for her discussion on the STScI standards, and Paul Martini for helpful comments.
no-problem/9909/cond-mat9909065.html
ar5iv
text
# Critical States in a Dissipative Sandpile Model

## Abstract

A directed dissipative sandpile model is studied in two dimensions. Numerical results indicate that the long time steady states of this model are critical when grains are dropped only at the top or everywhere. The critical behaviour is mean-field like. We discuss the role of infinite avalanches of dissipative models in periodic systems in determining the critical behaviour of the same models in open systems.

PACS numbers: 64.60.Ht, 05.65.+b, 45.70.Ht, 05.40.-a

Spontaneous emergence of long range spatio-temporal correlations in driven dynamical systems, without fine tuning of any control parameter, is the concept of Self-Organized Criticality (SOC) . Since its introduction in 1987 , the precise conditions which are necessary and sufficient for SOC have been subjected to intense scrutiny. The question which attracted much attention is: can one have criticality if there is a non-zero rate of bulk dissipation? While some early works suggested that conservation of the transported quantity in the dynamical rules is indeed a necessity, later works claimed a negative answer. In this paper, we study a directed dissipative sandpile model and our numerical results indicate that it is critical. We argue that a dissipative model may be critical provided the dissipation is not too strong, and conjecture a criterion to determine the critical behaviour.

In the sandpile model of SOC, sand grains are locally injected and transported on an arbitrary lattice. A site can accommodate only a limited number of grains. A site relaxes if the number of grains exceeds a certain cut-off, and transfers the grains equally to the neighbouring sites. This transfer process is conservative: no grain is lost or created. In the critical state, cascades of relaxations, called avalanches, follow single injections of grains. Grains, however, dissipate out of the system through the boundary, otherwise no steady state is possible. This is called the Abelian Sandpile Model (ASM) . A globally driven conservative earthquake model is also similarly defined, where energy is fed uniformly at all sites and transported . This model reproduces power laws of energy release similar to the Gutenberg-Richter law .

There are some studies on the dissipative models also. MKK studied a sandpile model where a grain can dissipate during a relaxing event, in a probabilistic manner. Numerical findings show that the system reaches a sub-critical state with the characteristic sizes of the avalanches depending inversely on the probability of dissipation . On the other hand, the dissipative ASM showed criticality with mean-field like critical behaviour . The one-dimensional version of this model also showed critical behaviour even with a finite driving rate . OFC studied the dissipative earthquake model, where the dissipation is controlled by a parameter $`\alpha `$. It is claimed that the OFC model is critical for $`\alpha _c<\alpha <\alpha _o`$, the conservative value of $`\alpha `$ being $`\alpha _o`$, with the critical behaviour depending on $`\alpha `$ . The stochastic version of the OFC model, however, is shown to lose criticality for any $`\alpha <\alpha _o`$ .

In a conservative model of SOC the grains move a distance of the order of the system size $`L`$ when started from the innermost region. This makes the average avalanche size grow as a power of $`L`$, so that an infinite system has a power law distribution of the avalanche sizes.
In contrast, in a dissipative model the grains dissipate at any distance within the system. If all grains dissipate within a certain cut-off distance, the average avalanche size would not have any dependence on $`L`$ in the large-$`L`$ limit. Therefore, for a dissipative model to be critical, only a fraction $`f(L)`$ of grains should dissipate from the bulk and the rest through the boundary, such that $`f(\infty )=\mathrm{lim}_{L\to \infty }f(L)\le 1`$. We present examples of both cases in the following.

On an oriented square lattice with extension $`L`$, sites are either vacant or occupied by single grains in the stable state. The system is periodic along the $`x`$ direction and the $`y`$ coordinate increases downward. Grains are dropped randomly. A toppling occurs only when the number of grains $`h_i>1`$; the site $`i`$ is then vacated: $`h_i\to 0`$. The system has a preference along the $`y`$ direction and the down-left and the down-right neighbours at the next row get one grain each: $`h_j\to h_j+1`$. In a toppling with height 2, grain number is conserved, whereas in a toppling with height 3, one grain dissipates from the system. Unlike the Directed Abelian Sandpile model (DASM) , our model is non-Abelian since sites are vacated in a toppling, and we call it the ‘Directed Dissipative Sandpile Model’ (DDSM).

In a parallel dynamics all toppling sites reside in a single row on a contiguous toppling line (TL). It has a variable length since the fluctuations take place only at the two ends. If the TL has a length $`\ell `$ at time $`t`$, it would have a length of at least $`\ell -1`$ at time $`(t+1)`$, since all inner $`(\ell -1)`$ sites will get two grains each and will topple again. If either or both neighbours of the end sites are occupied, the TL collects the grains into it, and grows in length to $`\ell `$ or $`\ell +1`$. However, if these neighbours are vacant, the TL fills them and shrinks in length. Therefore, the two ends of the TL move in principle like two annihilating random walks starting from the same point. The avalanche terminates when they meet and annihilate each other .

Grains are randomly dropped in two ways: in case A, they are dropped only on the top row at $`y=1`$, and in case B, they are dropped everywhere. Grains dissipate through the boundary at $`y=L`$. First, we consider case A; a stable configuration is shown in Fig. 1(a). Grains are marked by black dots, whereas vacant sites are left blank. Lines of grains in the shape of ‘V’ are mostly observed. This is because, due to the bulk dissipation, the density is so low that the TL moves almost in a deterministic manner. A ‘V’ is formed by the movement of a TL through a vacant region. In this case the TL uniformly shrinks, leaving behind a trail of two converging lines of occupied sites at the two ends. However, a TL may also propagate in a ‘$`\mathrm{\Lambda }`$’ between two Vs. In that case it uniformly grows in length, deletes two sides of two Vs up to their lowest points and then starts shrinking, producing two converging lines which finally make a bigger V. In this way bigger V shapes are generated at the expense of smaller Vs, which finally reach the boundary at the bottom and dissipate. Such almost deterministic dynamics makes avalanches of rectangular shape in general, but mostly they are squares. An avalanche deletes all occupied sites through which it passes. No dissipation occurs on the first two rows, where the average density is 1/2 as in DASM .
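The rules above are compact enough to simulate directly. The following sketch is our own (lattice size, seed and drop counts are arbitrary choices); it implements case A with the parallel toppling-line dynamics and records avalanche sizes:

```python
import random

def drop_and_relax(h, x):
    """Drop one grain on the top row at column x and relax (case A).
    A site with h > 1 topples: it is vacated (so a third grain dissipates)
    and sends one grain to each down-left and down-right neighbour.
    x is periodic; grains toppled off row L-1 leave the system.
    Returns (avalanche size s, lifetime t)."""
    L = len(h)
    h[0][x] += 1
    line = {x} if h[0][x] > 1 else set()
    s = t = y = 0
    while line:
        s += len(line)
        t += 1
        for xi in line:
            h[y][xi] = 0
        if y + 1 == L:
            break
        nxt = set()
        for xi in line:
            for xn in ((xi - 1) % L, (xi + 1) % L):
                h[y + 1][xn] += 1
                if h[y + 1][xn] > 1:
                    nxt.add(xn)
        line, y = nxt, y + 1
    return s, t

random.seed(1)
L = 200                                  # arbitrary small system
h = [[0] * L for _ in range(L)]
sizes = [drop_and_relax(h, random.randrange(L))[0] for _ in range(20000)]
print(max(sizes), sum(sizes[5000:]) / len(sizes[5000:]))   # crude transient cut
```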
The density then decreases with $`y`$ as a power law: $`\rho (y)\sim y^{-\alpha }`$. A system of size $`L=13000`$ is simulated by dropping $`2\times 10^9`$ grains. We plot $`\rho (y)y^{1.012}`$ against $`y`$ on a double logarithmic scale; the curve is horizontal for large $`y`$ values, giving $`\alpha =1.012\pm 0.030`$ (Fig. 2).

Avalanche size $`s`$ is the number of sites toppled in an avalanche. Simulation results indicate that the cumulative probability distribution of $`s`$ follows a power law: $`P(s)\sim s^{1-\tau _s}`$ with $`\tau _s=1.52\pm 0.03`$. The life-time $`t`$ of an avalanche is its vertical extension along the preferred direction and also follows a similar power law: $`P(t)\sim t^{1-\tau _t}`$ with $`\tau _t=2.027\pm 0.030`$ (Fig. 2). The average avalanche size $`<s(t)>`$ varies with life-time $`t`$ as $`<s(t)>\sim t^{\gamma _{st}}`$ with $`\gamma _{st}=2.01\pm 0.03`$. Since $`s,t`$ are two measures of the same random avalanche cluster, they are necessarily dependent, and are related by the scaling relation: $`\gamma _{st}=(\tau _t-1)/(\tau _s-1)`$.

We explain these results in the following way. It is reasonable to assume that most of the avalanches are of rectangular shapes, which implies that $`\gamma _{st}=2`$. Now, if the TL has a width $`w(t^{\prime })`$ at the intermediate time $`t^{\prime }`$, then $`2w(t^{\prime })`$ grains cross that row $`y=t^{\prime }`$. The dissipation flux per grain can be divided into ‘bulk-flux’ and ‘boundary-flux’. All grains crossed by the TL, except at its two ends, dissipate. Therefore, the density and the system size $`L`$ control the share between the bulk and the boundary fluxes. The constant average boundary-flux through the row at $`y`$ is $`<w(y)>y^{1-\tau _t}`$, which gives $`<w(y)>\sim y^{\tau _t-1}`$. But, since the average avalanche size of life-time $`t`$ is $`<s(t)>=\int _0^tw(t^{\prime })dt^{\prime }\sim t^{\tau _t}`$, we get $`\gamma _{st}=\tau _t=2`$ and $`\tau _s=3/2`$. We numerically check the relation $`<w(y)>=ky`$ and find a nice straight line with slope $`k=0.312\pm 0.001`$ and correlation coefficient 0.999.

Since the density of the system decreases with increasing $`y`$, we expect that the average dissipation should also decrease with increasing $`y`$. The fraction $`\sigma (y)`$ of the total number of grains dissipated in the $`y`$-th row varies as $`\sigma (y)\sim y^{-\beta }`$, where $`\beta =1.184\pm 0.030`$ (Fig. 2). Therefore the bulk-flux $`f(L)`$ should vary as $`f(L)=f(\infty )-CL^{-x}`$ with $`x=\beta -1`$. The exponent $`x`$ is estimated independently by plotting $`f(L)`$ vs. $`L^{-x}`$ for different $`x`$ values; the best values obtained are $`x=0.17\pm 0.03`$ and $`f(\infty )=0.634\pm 0.010`$.

Now we consider case B (Fig. 1(b)). The local density fluctuates widely, since an avalanche sweeps the region it passes and the local density in this region re-starts growing from scratch. Since the grains are dropped everywhere uniformly and the bulk dissipation depends on the density, we expect that there should be a saturation region where the average density is constant. The density $`\rho (y)`$ decreases from 1/2 and then saturates to a constant value $`0.1543\pm 0.0010`$ around $`y_c\approx 100`$. The rate of dissipation $`\sigma (y)`$ initially increases but finally saturates to the uniform dissipation limit $`\sigma (y)=C/L`$, with $`C=1.01`$ (Fig. 3). The bulk-flux $`f(L)`$ asymptotically approaches $`f(\infty )=1`$ as $`1/L`$. It turns out that the system has two regions: the high density region extends from the top to $`y_c`$, and the saturation region from $`y_c`$ to $`L`$.
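The quoted exponents can be checked against the scaling relations with one-line arithmetic; a trivial sketch of ours:

```python
tau_s, tau_t, gamma_st = 1.52, 2.027, 2.01
print((tau_t - 1) / (tau_s - 1))   # 1.975, consistent with gamma_st = 2.01 +- 0.03

beta, x = 1.184, 0.17
print(beta - 1)                    # 0.184, consistent with x = 0.17 +- 0.03
```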
We separately collect the distribution data for the avalanches originating in the high density and the saturation regions. For the high density region, $`\tau _s\approx 1.5`$ and $`\tau _t\approx 2.0`$ are obtained as in case A, and $`<s(L)>\sim L`$ and $`<t(L)>\sim \mathrm{log}L`$ are observed. Linearity in $`<w(t)>=k_1t`$ is still obeyed with $`k_1=0.1`$, and $`\gamma _{st}\approx 2`$ is obtained again. However, for the saturation region, plots of the distribution data showed two regimes: an initial high slope $`\tau _s^s\approx 2.5`$ for the small $`s`$ values, followed by a slope $`\tau _s^l\approx 1.5`$ for the large $`s`$ values. We attribute the large value of $`\tau _s^s`$ to those small avalanches which grow on an empty region swept out by a previous large avalanche. When this region reaches the steady state, the avalanches get the usual exponent $`\tau _s^l=1.5`$ for large $`s`$ values. We see that both $`<s(L)>`$ and $`<t(L)>`$ have constant values independent of $`L`$. The total distribution has the behaviour of the saturation region, since the avalanches generated in this region have larger weights.

We now look into the effect of the boundary on dissipative models in more detail. In a conservative sandpile model with periodic boundary conditions, the total mass of the system grows indefinitely. Very soon, an ‘Infinite Avalanche’ starts which never terminates. For the ASM on a periodic square lattice, the same height configuration repeats at a certain interval, toppling all sites exactly once. The period is of the order of $`L`$ and is dependent on the initial configuration. We show such a $`2\times 2`$ system in Fig. 5(a). Next we consider the dissipative ASM on the same lattice. After some initial dissipation, this model also creates a periodic infinite avalanche which is dissipationless (Fig. 5(b)). We now test our DDSM on a periodic system, by making the $`y`$ direction also periodic. We observe again dissipationless infinite avalanches in both cases A and B. A TL in the form of a ring moves indefinitely with uniform speed on the empty torus. An infinite avalanche has to be dissipationless after some time, otherwise it would make the whole system empty.

If a dissipative model in a periodic system has no infinite avalanche, it indicates that even the largest avalanche is not as big as the system. Therefore, for the same dissipative model in the open system, the boundary should have no effect on the avalanche sizes, leading to sub-critical states. We conjecture that: a dissipative model will not show self-organized criticality if the same model on a periodic system has no infinite avalanches.

To verify this conjecture, we check some examples. The probabilistic dissipation model , the stochastic OFC model and the dissipative two-state sandpile model are all non-critical on open-boundary systems and do not have infinite avalanches on periodic systems. However, the random creation-dissipation model in , the dissipative ASM , and the cases of the DDSM described in this paper lead to SOC states with open boundaries and also have periodic avalanches on periodic systems. Finally, we check that the deterministic OFC model also does not produce any infinite avalanche on the periodic system for any $`\alpha <1/4`$. Therefore, according to our conjecture, the deterministic OFC model is not critical, which is against the general belief. In a recent preprint, it has been claimed that the deterministic OFC model is critical in the conservative regime only .

We acknowledge D. Dhar with thanks for the critical reading of the manuscript and for many useful comments.
S. S. M. thanks S. Krishnamurthy, S. Roux and S. Zapperi for useful discussions. R.C. acknowledges financial support under the European network project FMRXCT980183. Electronic address for correspondence: manna@boson.bose.res.in
# Fermion Cluster Algorithms ## 1 INTRODUCTION Fermions are quite difficult to deal with in Monte Carlo methods. The main problem is the Pauli principle, which introduces negative Boltzmann weights when fermions are treated physically as particles traveling in time. Until a few months ago the only known approach was to integrate them out and hope that the remaining problem is described by a positive Boltzmann weight. This approach has been successful in a few cases of physical interest like lattice QCD at zero chemical potential and zero vacuum angle. Unfortunately, even in these cases the hybrid Monte Carlo methods slow down dramatically due to critical slowing down in the chiral limit . Recently a new class of fermion algorithms has been discovered in which the fermions are treated as physical particles traveling in time . Such world line algorithms had been suggested in the past; however, no solution to the fermion sign problem was found. We have been able to solve this problem using cluster algorithms in a limited class of models. The progress is due to a better understanding of the relation between the topology of clusters and the effect of their flip on the fermion permutation sign. In certain cases, this knowledge can be used to completely eliminate the fermion sign problem and at the same time gain from the ability of cluster algorithms to beat critical slowing down. Thus, when successful, these algorithms are among the most efficient fermion algorithms discovered so far. ## 2 SIGN PROBLEM AND SOLUTION The fermionic cluster algorithms that we have discovered are based on the cluster algorithm for a bosonic quantum spin-1/2 model. It is well known that a site with a fermion can be identified with a spin “up” state and an empty site with a spin “down” state, up to sign factors that arise due to the Pauli principle. For every spin configuration the spin “up” states can be used to track fermion world lines, which are closed in Euclidean time and describe a permutation of fermions. The sign of this permutation is exactly the same as the product of sign factors that arise from the Pauli principle. Hence it is always possible to absorb the physics of the Pauli principle into the sign of the Boltzmann weight of the spin configuration. Mathematically, the fermionic partition function $`Z_f`$ can be written as $$Z_f=\sum _{[𝒞]}\mathrm{Sign}[𝒞]W_b[𝒞],Z_b=\sum _{[𝒞]}W_b[𝒞],$$ (1) where $`Z_b`$ is the partition function of the quantum spin model written as a sum over configurations $`𝒞`$, defined by a set $`𝒞_i,i=1,2,\dots ,N_C`$ of connected spin clusters, with a positive Boltzmann weight $`W_b[𝒞]`$. In general the sign factor in $`Z_f`$ is a product of the fermion permutation sign $`\mathrm{Sign}_f[𝒞]`$, discussed above, and other local sign factors $`\mathrm{Sign}_b[𝒞]`$ that may be necessary to relate the fermionic and the bosonic models. Further, the existence of a cluster algorithm implies that the weight $`W_b[𝒞]`$ remains the same if all the spins of any connected spin-cluster are flipped. On the other hand, the $`2^{N_C}`$ degenerate configurations obtained by flipping the clusters can have different sign factors $`\mathrm{Sign}[𝒞]`$. Although the above method of writing the fermionic model in terms of clusters of a bosonic model is well known, the freedom in choosing the bosonic weights $`W_b[𝒞]`$ and local bosonic sign factors $`\mathrm{Sign}_b[𝒞]`$ had not been exploited until now.
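To make the severity of the problem concrete, the following minimal Python sketch (a toy model, not the cluster algorithm itself) estimates the average sign $`Z_f/Z_b`$ by brute-force re-weighting, assuming for illustration that each of $`N_C`$ clusters carries an independent random sign; the flip probability and sample size are hypothetical choices. The exact average decays exponentially with the number of clusters, i.e. with the space-time volume, which is why naive re-weighting fails and why the cluster decomposition discussed below is needed.

```python
import random

def average_sign(n_clusters, p_minus=0.1, n_samples=100_000, seed=7):
    """Toy estimate of <Sign> = Z_f / Z_b, with each cluster contributing
    an independent sign that is negative with probability p_minus."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_samples):
        sign = 1
        for _ in range(n_clusters):
            if rng.random() < p_minus:
                sign = -sign
        total += sign
    return total / n_samples

# The exact value (1 - 2*p_minus)**n_clusters decays exponentially with
# the number of clusters, so the Monte Carlo signal drowns in noise:
for n in (10, 50, 100):
    print(n, average_sign(n), (1 - 2 * 0.1) ** n)
```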
We have discovered that it is always possible to be clever in using this freedom so that the connected spin-clusters contribute independently to the sign of the configuration, i.e., the overall sign can always be written as $`\mathrm{Sign}[𝒞]=\prod _{i=1}^{N_C}\mathrm{Sign}(𝒞_i)`$, where $`\mathrm{Sign}(𝒞_i)`$ is the sign associated with a connected cluster of spins. If $`\mathrm{Sign}[𝒞_i]`$ changes when the spins are flipped, the cluster is called a meron (this word was originally used in to describe clusters with the same property in a classical $`O(3)`$ model). Thus, meron clusters identify two spin configurations with the same weight but opposite signs, and hence only configurations without merons contribute to the partition function. It is always possible to include a Metropolis decision during the cluster formation process to suppress meron clusters in a controlled way. In certain models the spins within any connected cluster can always be flipped to a reference pattern $`𝒞_i^{\mathrm{ref}}`$ such that $`\mathrm{Sign}[𝒞_i^{\mathrm{ref}}]=1`$. In such cases the average of $`\mathrm{Sign}[𝒞]`$ under all the $`2^{N_C}`$ flips of connected spin-clusters is $`1`$ in the zero meron sector, and this solves the sign problem completely. ## 3 MODELS AND ALGORITHMS There are a variety of models that can be solved using the above ideas. Here we discuss how these ideas can be applied to solve free non-relativistic fermions on a d-dimensional hyper-cubic lattice described by the Hamiltonian, $$H=\sum _{x,i}\left(n_x+n_{x+\widehat{i}}-[c_x^+c_{x+\widehat{i}}+c_{x+\widehat{i}}^+c_x]\right),$$ (2) where $`n_x=c_x^+c_x`$ is the fermionic occupation number and $`c_x^+`$ and $`c_x`$ are creation and annihilation operators. This model was originally considered in . However, due to a wrong choice of $`W_b[𝒞]`$ and $`\mathrm{Sign}_b[𝒞]`$, even this simple model appeared intractable numerically. Here we show how a different choice of these factors solves the problem completely. Following , we construct the partition function $`Z_f=\mathrm{Tr}[\mathrm{exp}(-\beta H)]`$ by discretizing the Euclidean time axis into $`2d\times M`$ steps such that at a given time slice each spin interacts with only one neighboring spin. Thus, the Boltzmann weight of any configuration of fermion occupation numbers is a product of two-spin transfer matrix elements, up to the global fermion permutation sign. Figure 1 illustrates a typical configuration in the path integral, with the shaded regions representing the two-spin interactions. In order to construct the cluster algorithm we next introduce bond variables in addition to spin variables to describe connected spin-clusters, and find transfer matrix elements for these new types of connected spin configurations such that the partition function remains the same. A given spin configuration can represent many spin-cluster configurations, all of which have the same global fermion permutation sign. If we allow the transfer matrix elements of these new configurations to be negative, there is a lot of freedom in choosing the weights and signs. Figure 2 illustrates a particular choice such that summing over the bond variables reproduces the weights of spin configurations. An interesting feature of the spin connection rules of figure 2 is that all spins in a connected cluster are of the same type.
Further, the negative sign associated with the cross bond configuration with all “up” spins is an extra local negative sign that can be absorbed into $`\mathrm{Sign}_b[𝒞]`$, whereas the global fermion permutation sign is absorbed into the factor $`\mathrm{Sign}_f[𝒞]`$. The model described by the magnitude of the weights of figure 2 is the spin-1/2 ferromagnetic Heisenberg model and can be updated using a cluster algorithm. Remarkably, $`\mathrm{Sign}[𝒞]=\mathrm{Sign}_f[𝒞]\mathrm{Sign}_b[𝒞]`$ has all the desirable properties described in the previous section. The sign of a connected spin cluster is set to $`1`$ if it is a cluster of “down” spins or if the temporal winding of the cluster is odd. Otherwise the sign of that cluster is $`-1`$. Thus, clusters with an even temporal winding are merons. In summary, we have shown that clusters generated with the cluster algorithm of the ferromagnetic quantum spin-1/2 Heisenberg model can also describe free non-relativistic fermions whose Hamiltonian is given in eq.(2), if we discard configurations containing clusters with an even temporal winding. In order to demonstrate the correctness of this observation we have calculated the two point fermion Greens function defined as $$G(x,y;t)=\frac{1}{Z_f}\mathrm{Tr}\left(\mathrm{e}^{-(\beta -t)H}c_x\mathrm{e}^{-tH}c_y^+\right),$$ (3) on a $`4\times 4\times 4`$ lattice at $`\beta =1`$ and $`M=16`$. Figure 3 shows this function in momentum space for $`\stackrel{}{p}=(0,0,0)`$ and $`\stackrel{}{p}=(\pi ,0,0)`$. Evidently, the exact zero mode of $`H`$ at zero momentum does not lead to any complications, unlike in conjugate gradient methods. It is possible to extend the model to include short range repulsive interactions. In addition, the above ideas are also applicable in a variety of models with a rich phase structure. One such model has a finite temperature chiral phase transition and was studied extensively in . The cluster algorithm of the anti-ferromagnetic Heisenberg model plays an important role there. Remarkably, one can work directly in the chiral limit and no critical slowing down is observed. All this shows that fermion cluster algorithms can provide a very elegant method to solve fermionic field theories numerically. I wish to thank Uwe Wiese for his collaboration and many discussions at various stages of this work.
# UNIGRAZ-UTP-17-09-99 BUTP-99/18 Effects of Topology in the Dirac Spectrum of Staggered Fermions Supported by Fonds zur Förderung der Wissenschaftlichen Forschung in Österreich, Project P11502-PHY. ## 1 Introduction Leutwyler and Smilga’s sum rules for the eigenvalues of the chiral Dirac operator for finite volume QCD suggest that the statistical fluctuations of the spectrum of this operator are universal in the infrared region; these should be described by a chiral Random Matrix Theory (chRMT) . (For a recent review on Random Matrix Theory in general cf. .) Universality has been proved extensively in chRMT; for the case of QCD, see . Explicit lattice calculations may provide an answer to the question whether QCD really lies inside one of the universality classes of chRMT. Chiral symmetry, which is the main ingredient here, may be effectively realized on the lattice by the so-called “overlap fermions” and by the Fixed-Point (FP) fermion action , both satisfying the Ginsparg-Wilson condition. Ginsparg-Wilson fermions – in spite of their theoretical appeal – present however non-trivial technical difficulties in a dynamical set-up (for first experiences in this context, see ). Kogut-Susskind “staggered” fermions, which are the object of this letter, realize chirality only partially, but are also much simpler, and still represent the typical framework for lattice QCD calculations when chirality is relevant. However, and this is the main conclusion of this letter, comparison with chRMT is in this case not straightforward and subject to additional restrictions (besides the usual ones, well known in the literature) which apply as long as the lattice cut-off is finite. A class of problems comes from topology: when comparing lattice eigenvalue spectra with chRMT predictions, the validity of the index theorem for the Dirac operator is assumed, while the staggered Dirac operator has no exact zero modes and a sound fermionic definition of the topological charge is not possible. Another class of problems is due to the flavor structure of staggered fermions, which is different at finite cut-off and in the continuum limit. Each copy of staggered fermions describes in the continuum $`N_f=2^{d/2}`$ degenerate massless physical fermions; at finite cut-off the full chiral symmetry is broken down to an abelian subgroup resembling the non-degenerate ($`N_f=1`$) situation, with the relevant difference that the symmetry is here anomaly-free. We study these issues in the simplified framework of the Schwinger model in the quenched and the dynamical situation. A particularity of two dimensions is that a continuous symmetry cannot be broken by the vacuum . Since the chiral symmetry at finite cut-off is non-anomalous though abelian, the usual fermion condensate vanishes even if staggered fermions act as just one flavor when comparing Dirac eigenvalue spectra with chRMT. With vanishing condensate, a different kind of universality should apply in the infrared part of the spectrum . In a Monte Carlo simulation, all this applies of course only for the unquenched (dynamical) data. For the “quenched” model (corresponding to $`N_f=0`$) the usual chRMT universality is expected to hold . In this case, however, different complications come into play when the thermodynamic limit of the theory is taken . Some results concerning the quenched data have been discussed in . ## 2 Predictions from Random Matrix Theory The lower edge of the spectrum of the Dirac operator (i.e.
its infra-red region) is expected to be universal at the scale of the level spacing $`\mathrm{\Delta }\lambda \sim 1/V`$ (the microscopic scale). The ‘microscopic’ scaling variable is $`z=V\mathrm{\Sigma }\lambda `$, where $$\mathrm{\Sigma }\equiv \pi \underset{\lambda \to 0}{lim}\underset{V\to \mathrm{\infty }}{lim}\rho (\lambda ),$$ (1) and $`\rho (\lambda )`$ denotes the associated spectral density per unit volume. When comparing the statistical properties of the Dirac operator with chRMT, $`\mathrm{\Sigma }`$ is an external scale parameter related to the dynamics of the physical system under study. The Banks-Casher formula implies $`\mathrm{\Sigma }=\overline{\psi }\psi `$. Three classes of universality are predicted by chRMT , corresponding to (chiral) orthogonal, unitary and symplectic ensembles (chOE, chUE and chSE, respectively). The universal properties considered in this letter are the microscopic spectral density $$\rho _s(z)=\underset{V\to \mathrm{\infty }}{lim}\frac{1}{\mathrm{\Sigma }}\rho \left(\frac{z}{V\mathrm{\Sigma }}\right)$$ (2) and the probability distribution of the smallest eigenvalue $`P(\lambda _{\mathrm{min}})`$. These distributions depend on the topological charge $`\nu `$ of the background gauge configuration and on the number of flavors $`N_f`$ of quarks included in the dynamics. We also checked the level spacing statistics . ## 3 Statistical properties of the spectrum Staggered fermions partially realize chiral symmetry on the lattice. One species of staggered fermion describes $`N_f=2^{d/2}`$ Dirac fermions in the continuum limit. For finite lattice spacing the chiral symmetry $`U_V(2^{d/2})\times U_A(2^{d/2})`$ is however broken down to a $`U_V(1)\times U_A(1)`$ non-anomalous sub-symmetry, which accounts for the reflection invariance of the purely imaginary spectrum. In chRMT, the number of dynamical fermions $`N_f`$ and the topological charge $`\nu `$ of the background configuration enter as external parameters. Our (compact) gauge configurations were sampled in the quenched set-up, i.e. according to the pure gauge measure (given by the standard plaquette action). This corresponds in chRMT to the case $`N_f=0`$. Assuming no extra symmetry for the Dirac operator (besides the one mentioned above) in our two-dimensional set-up, the statistical fluctuations of the spectrum in the infrared region are expected to be described by chUE; this was confirmed by previous results with the Fixed Point and Neuberger’s overlap action, where, however, deviations (i.e. a distribution resembling chSE) were observed whenever the physical volume was too small. In order to explore the dynamics of fermions, we then re-weighted the quenched configurations with the fermion determinant. For $`d=4`$ this procedure usually introduces large statistical fluctuations; in our two-dimensional model these fluctuations did not appear to be too harmful . Since the chiral symmetry of lattice staggered fermions is abelian, one expects that the dynamical situation is described by a chRMT with $`N_f=1`$. The two-dimensional situation is however peculiar due to the Mermin-Wagner-Coleman theorem stating that a continuous symmetry cannot be broken by the vacuum; only an anomaly can do that. Since the operator $`\overline{\psi }(x)\psi (x)`$ is not invariant under the residual (non-anomalous) abelian chiral symmetry of staggered fermions, its expectation value vanishes .
(One can argue that the simplest observable signaling the breaking of the anomalous axial symmetry is in this context $`\overline{u}u\overline{d}d`$.) With vanishing fermion condensate, the universality class of chUE does not apply anymore, and a new kind of universality holds . The validity of the $`N_f=1`$ chRMT predictions is restricted. In the continuum limit the irrelevant terms of the action breaking the full symmetry vanish, and a transition to $`N_f=2`$ is expected to occur, with a doubling of the spectrum. In this limit, two degenerate fermions described by two-component spinors are the effective degrees of freedom, the two-flavor (massless) Schwinger model being the underlying continuum theory . RMT predictions depend also on the topological charge of the background gauge configuration. In the continuum the topological charge is related to the index of the Dirac operator (the number of zero modes counted according to their chirality). On the lattice, a consistent definition of the topological charge in terms of the index of the Dirac operator is possible only for Ginsparg-Wilson (i.e. Overlap and FP) fermions . For staggered fermions, away from the continuum limit, the Dirac operator has no exact zero modes, and the naive expectation is that all configurations show spectral distributions described by chRMT predictions for the trivial sector . However, for lattices that are fine enough one should attain the continuum situation. We will explore this hypothetical scenario and check possible systematic effects when comparing the spectrum of the Dirac operator with chRMT predictions. Lacking (in this case) a consistent fermionic definition of the topological charge $`\nu `$, we rely on the geometric definition of $`\nu `$ for the gauge configurations. In the analysis we distinguish between the sectors with trivial ($`\nu =0`$) and non-trivial topological charge. ### 3.1 The simulation We considered statistically independent gauge configurations for lattices of size $`16^2`$ ($`\beta `$ = 2, 4, 6; 10000 configurations), $`24^2`$ ($`\beta `$ = 2, 4, 6, 8; 5000 configurations) and $`32^2`$ ($`\beta `$ = 2, 4, 6; 5000 configurations). Even-odd preconditioning and standard LAPACK routines were used to diagonalize the staggered Dirac operator. #### The trivial sector. Fig. 1 shows the distribution of the smallest eigenvalue $`P(\lambda _{\mathrm{min}})`$ for some lattice volumes. The histograms are obtained by choosing configurations with geometrical topological charge $`\nu =0`$; this reduces the statistics to between 30% and 10% of the original sample, depending on $`L`$ and $`\beta `$. We observe consistency with chUE except for $`L=16`$, $`\beta =6`$, where the physical volume is likely to be too small for chRMT to apply. For the fixed point and Neuberger’s action an apparent transition to chSE, not observed here, took place for small lattices and/or high values of $`\beta `$ . The fit of the chUE distribution provides an estimate for $`\mathrm{\Sigma }`$, which is given in Table 1 and in Fig. 5, left (we are using $`ea=1/\sqrt{\beta }`$ as abscissa; in the ordinate we report $`\mathrm{\Sigma }\sqrt{\beta }=\overline{\psi }\psi _{\mathrm{cont}}/e`$; the equations hold in the weak coupling limit of the Schwinger model with one flavor). Following RMT, the spectrum is separated into a fluctuating part (conjectured to follow chRMT predictions) and a smooth background. The information on the chiral condensate is contained in both.
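As an illustration of the fitting step just described, the following Python sketch extracts $`\mathrm{\Sigma }`$ from a sample of smallest eigenvalues using the quenched ($`N_f=0`$) chUE prediction for $`\nu =0`$, $`P(z)=(z/2)\mathrm{exp}(-z^2/4)`$ with $`z=V\mathrm{\Sigma }\lambda _{\mathrm{min}}`$; for this one-parameter scale family the maximum-likelihood estimate has a closed form. The volume, condensate and sample size in the self-test are placeholders, not our simulation parameters.

```python
import numpy as np

def sigma_from_lambda_min(lambda_min, volume):
    """ML estimate of Sigma from the quenched chUE (nu = 0) distribution
    P(z) = (z/2) exp(-z**2 / 4), z = V * Sigma * lambda_min.
    Maximizing sum_i [2*ln(a) - a**2 * lam_i**2 / 4] over a = V*Sigma
    gives the closed-form solution below."""
    lam = np.asarray(lambda_min)
    a = 2.0 / np.sqrt(np.mean(lam ** 2))
    return a / volume

# self-test on synthetic data: since z**2 / 4 is exponentially distributed,
# z = 2*sqrt(-log(u)) with u uniform samples the chUE shape exactly
rng = np.random.default_rng(1)
V, sigma_true = 24 ** 2, 0.12
z = 2.0 * np.sqrt(-np.log(rng.random(5000)))
print(sigma_from_lambda_min(z / (V * sigma_true), V))   # close to 0.12
```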
Removing the smooth background is done by “unfolding”: the eigenvalues are mapped to a variable $`z`$, such that the average level spacing is constant. In Fig. 2 we plot the microscopic spectral density $`\rho _s(z)`$ for the unfolded eigenvalues for two situations. In the unfolding process the average value of the spectral density near $`\lambda =0`$ provides a direct estimate for $`\mathrm{\Sigma }`$ (see (1)). Neglecting finite-size effects, this value should agree with the number obtained from the smallest eigenvalue distribution in the scaling region of chRMT. We find discrepancies between the two determinations for $`\beta \gtrsim 4`$, the “background” value of $`\mathrm{\Sigma }`$ (i.e. $`\pi \rho (0)`$) coming out larger; as a consequence of this discrepancy, the spectral density does not follow the predicted shape anymore (Fig. 2, right). We interpret this as due to the transition to the full flavor-symmetric situation of the continuum limit, which in the quenched case implies trivial doubling of the spectrum. It has been suggested that the spectral density of the quenched Schwinger model should diverge exponentially with the volume for small eigenvalues; our data do not allow us to draw any firm conclusion concerning this. We only observe that for $`\beta =`$2 and 4, $`L=24`$ and 32 give consistent values of $`\mathrm{\Sigma }`$, while this is no longer the case for the larger value of $`\beta =6`$, indicating a slowing down of the scaling behavior in the weak coupling region. #### Topological sector. Fig. 3 shows the distributions of the first and second eigenvalue for configurations with $`|\nu |=1`$. Ideally, the first eigenvalue ought to be a zero mode, while the second eigenvalue should follow the predictions of chUE for the smallest eigenvalue for $`|\nu |=1`$; these may be obtained just by using the values of $`\mathrm{\Sigma }`$ found in the trivial sector in the formulas for the smallest eigenvalue distribution with $`|\nu |=1`$ (the continuous line in Fig. 3). What we see instead is that for small $`\beta `$ the first eigenvalue follows the chUE statistics of the smallest eigenvalue in the trivial sector (dashed curve in the figure). Such a behavior was also observed for SU(3), d=4 . Increasing $`\beta `$, the first eigenvalue peaks closer and closer to zero and the second smallest eigenvalue approaches the predictions of chRMT for the smallest eigenvalue for $`|\nu |=1`$ ($`\beta \gtrsim 4`$). We conclude that only towards the continuum limit do the zero modes correlate clearly with the topological sector of the gauge configuration. In order to check the systematics of the possible misidentification, we repeated the fits with chRMT shapes without selecting the trivial topological sector: even for the smallest value of $`\beta `$ at our disposal, $`\beta =2`$, we always obtain considerably larger values of $`\chi ^2/d.o.f.`$ (data with asterisk in Table 1) and the fitted values of $`\mathrm{\Sigma }`$ are inconsistent with the determination in the trivial sector. We conclude that even for moderate values of $`\beta `$, the identification of the topological sector is an essential ingredient for the correct comparison with chRMT. #### Re-weighting. Of high interest in chRMT is the dynamics of fermions.
In order to account for dynamical fermions, we re-weight the quenched gauge configurations with the fermion determinant; in the case of the statistics $`P(\lambda _{\mathrm{min}})`$, for example, this amounts to the following redefinition: $$P(\lambda _{\mathrm{min}})=\frac{\sum _C\delta (\lambda _{\mathrm{min}}-\lambda _{\mathrm{min}}(C))\mathrm{det}𝒟(C)}{\sum _C\mathrm{det}𝒟(C)},$$ (3) where $`\lambda _{\mathrm{min}}(C)`$ denotes the smallest eigenvalue for the configuration $`C`$. The fluctuation of the determinant introduces additional fluctuations in the studied quantity (in our case the error typically doubles), which can be traced back to the effective loss of statistics of the gauge sample. Loss of statistics is also caused by the depletion of the trivial sector (the one under study) on large lattices. As already discussed in , in the quenched situation we find perfect chUE behavior for the level spacing distribution for all parameter values studied. This observation persists also for the re-weighted, unquenched data (e.g. Fig. 4, left): for the level spacing we always find the chUE distribution. Such a behavior in a situation of vanishing fermion condensate was found already in the Coulomb phase of compact four dimensional QED . In Fig. 4 (right) we report the distribution of the smallest eigenvalue for $`L=16`$, $`\beta =4`$ and find it clearly different from the quenched distribution. As expected from the discussion of Sec. 3, neither the $`N_f=1`$ chUE shape nor the $`N_f=2`$ one succeeds in fitting the distribution. The two different fits result in different values of $`\mathrm{\Sigma }`$, reported in Table 1 and Fig. 5, right. In both cases the values of $`\chi ^2/d.o.f.`$ of the fit are rather large, confirming that chUE does not give the right prediction. The fitted values of $`\mathrm{\Sigma }`$ are well above zero. As discussed, strictly speaking $`\mathrm{\Sigma }`$ ought to be zero, with a different chRMT and universality class applying. In the (true) one flavor situation the continuum condensate is $`\overline{\psi }\psi _{\mathrm{cont}}/e\simeq 0.15989`$; this situation is described by one copy of Overlap or FP fermions . ## 4 Discussion We conclude that for a careful RMT analysis of the eigenvalue spectrum of the staggered Dirac operator the identification of the topological sector is advisable. Towards the continuum limit the topologically non-trivial sectors exhibit an extra peak at small eigenvalues, which we attribute to the “want-to-be” zero modes. For a recent discussion on the relevance of quasi-zero modes and their chirality in the context of staggered fermions cf. . We identified problems due to the general ambiguity of staggered fermions between the choice $`N_f=1`$, applying at finite lattice spacing, and $`N_f=2^{d/2}`$, applying in the continuum limit. For example, the quenched microscopic spectral density clearly deviates from chUE for $`\beta \gtrsim 4`$, because the “background” estimate of $`\mathrm{\Sigma }`$, $`\pi \rho (0)`$, does not match the “microscopic” one, as obtained e.g. from the smallest eigenvalue distribution. Additional unexpected features come into play because of our two-dimensional context: the Mermin-Wagner-Coleman theorem implies a vanishing fermion condensate, with the result that chUE fails in the case of dynamical fermions. This is indicated by our data, even if the statistics is not good enough to draw firm conclusions.
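The re-weighting of Eq. (3) is straightforward to implement; a minimal sketch, assuming that the logarithm of the fermion determinant has been stored for every quenched configuration, is given below. The Kish effective sample size in the last line quantifies the loss of statistics caused by the determinant fluctuations mentioned above.

```python
import numpy as np

def reweighted_pdf(lambda_min, log_det, bins=30):
    """Eq. (3): P(l) = sum_C delta(l - l_min(C)) det D(C) / sum_C det D(C),
    estimated as a determinant-weighted histogram of the quenched sample."""
    w = np.exp(np.asarray(log_det) - np.max(log_det))   # shift logs for stability
    hist, edges = np.histogram(lambda_min, bins=bins, weights=w, density=True)
    n_eff = w.sum() ** 2 / np.sum(w ** 2)               # Kish effective sample size
    return hist, edges, n_eff
```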
This scenario is quite different from that found in , where the Schwinger model with Ginsparg-Wilson fermions was analyzed; there, genuine single-flavored fermions were described. Acknowledgment: We want to thank C. Gattringer, I. Montvay and K. Splittorff for helpful discussions. F.F. acknowledges support from the Schweizerischer Nationalfonds.
# Chiral Corrections to Baryon Masses Calculated within Lattice QCD ## 1 Introduction Chiral symmetry requires that the nucleon mass has the form $$m_N(m_\pi )=m_N(0)+\alpha m_\pi ^2+\beta m_\pi ^3+\gamma m_\pi ^4\mathrm{ln}m_\pi +\dots ,$$ for small $`m_\pi `$, where $`m_N(0)`$, $`\alpha `$, $`\beta `$, and $`\gamma `$ are functions of the strong coupling constant $`\alpha _s(\mu )`$. Recent work has shown that using physical insights from chiral perturbation theory and heavy quark effective theory one can derive new functional forms which describe the extrapolation of light baryon masses as functions of the pion mass $`(m_\pi )`$. These forms are applicable beyond the chiral perturbative regime and have been compared successfully with predictions from the Cloudy Bag Model and recent dynamical fermion lattice QCD calculations. ## 2 Analyticity By now it is well established that chiral symmetry is dynamically broken in QCD and that the pion is almost a Goldstone boson. It is strongly coupled to baryons and therefore plays a significant role in the $`N`$ and $`\mathrm{\Delta }`$ self energies. In the limit where the baryons are heavy, the pion-induced self energies of the $`N`$ and $`\mathrm{\Delta }`$, to one loop, are given by the processes shown in Fig. 1(a–d). We label these by $`\sigma _{NN}`$, $`\sigma _{N\mathrm{\Delta }}`$, $`\sigma _{\mathrm{\Delta }N}`$, and $`\sigma _{\mathrm{\Delta }\mathrm{\Delta }}`$. Note that we have restricted the intermediate baryon states to those most strongly coupled, namely the $`N`$ and $`\mathrm{\Delta }`$ states. Other intermediate states are suppressed by the baryon form factor describing the extended nature of baryons. The leading non-analytic contribution (LNAC) of these self energy diagrams is associated with the infrared behavior of the corresponding integrals – i.e., the behavior as the loop momentum $`k\to 0`$. As a consequence, it should not depend on the details of a high momentum cut-off, or form factor. In particular, it is sufficient for studying the LNAC to evaluate the self energy integrals using a simple sharp cut-off, $`u(k)=\theta (\mathrm{\Lambda }-k)`$, as the choice of form factor. The explicit forms of the self energy contributions for $`\sigma _{NN}`$, $`\sigma _{N\mathrm{\Delta }}`$ and so on are given in . Moreover, there is little phenomenological difference between this step function and the more natural dipole, provided one can tune the cut-off parameter $`\mathrm{\Lambda }`$. The self energies involving transitions of $`N\to \mathrm{\Delta }`$ or $`\mathrm{\Delta }\to N`$ are characterized by a branch point at $`m_\pi =\mathrm{\Delta }M`$. ### 2.1 Chiral Limit The leading non-analytic (LNA) terms are those which correspond to the lowest order non-analytic functions of $`m_q`$ – i.e., odd powers or logarithms of $`m_\pi `$. By expanding the expressions given in , we find that the LNA contributions to the nucleon/delta masses are in agreement with the well known results of $`\chi `$PT . Of course, our concern with respect to lattice QCD is not so much the behavior as $`m_\pi \to 0`$, but the extrapolation from high pion masses to the physical pion mass. In this context the branch point at $`m_\pi ^2=\mathrm{\Delta }M^2`$ is at least as important as the LNA behaviour near $`m_\pi =0`$. ### 2.2 Heavy Quark Limit Heavy quark effective theory suggests that as $`m_\pi \to \mathrm{\infty }`$ the quarks become static and hadron masses become proportional to the quark mass.
In this spirit, corrections are expected to be of order $`1/m_q`$ where $`m_q`$ is the heavy quark mass. Thus we would expect the pion induced self energy to vanish as $`1/m_q`$ as the pion mass increases. The presence of a fixed cut-off $`\mathrm{\Lambda }`$ acts to suppress the pion induced self energy for increasing pion masses. While some $`m_\pi ^2`$ dependence in $`\mathrm{\Lambda }`$ is expected, this is a second-order effect and does not alter this qualitative feature. Indeed, in the large $`m_\pi `$ limit of the equations, we find that they tend to zero at least as fast as $`1/m_\pi ^2`$. The agreement with both the chiral limit and the expected behaviour in the heavy quark limit suggests the following functional form for the extrapolation of the nucleon mass : $$M_N=\alpha _N+\beta _Nm_\pi ^2+\sigma _{NN}(m_\pi ,\mathrm{\Lambda })+\sigma _{N\mathrm{\Delta }}(m_\pi ,\mathrm{\Lambda }).$$ (1) ## 3 Lattice Data Analysis We consider two independent lattice simulations of the $`N`$ and $`\mathrm{\Delta }`$ masses from CP-PACS and UKQCD . Both of these use improved actions to study baryon masses in full QCD with two light flavours. We find that the two data sets are consistent, provided one allows the parameters introducing the physical scale to float within systematic errors of 10%. We begin by considering the functional form suggested in Section 2 with the cut-off $`\mathrm{\Lambda }`$ fixed to the value determined by fitting CBM calculations. This is shown as the solid curve in Fig. 2. In order to perform model independent fits (i.e. with $`\mathrm{\Lambda }`$ unconstrained), it is essential to have lattice simulations at light quark masses approaching $`m_\pi ^2\sim 0.1`$ GeV$`^2`$. This fit is illustrated by the dash-dot curve. It is common practice in the lattice community to use a polynomial expansion for the mass dependence of hadron masses. Motivated by $`\chi `$PT, the lowest odd power of $`m_\pi `$ allowed is $`m_\pi ^3`$: $$M_N=\alpha +\beta m_\pi ^2+\gamma m_\pi ^3$$ (2) The result of such a fit for the $`N`$ is shown in the dashed curve of Fig. 2. The coefficient of the $`m_\pi ^3`$ term, which is the leading non-analytic term in the quark mass, in the three parameter fit is $`-0.761`$. This disagrees with the coefficient of $`-5.60`$ known from $`\chi `$PT (which is correctly incorporated in Eq. (1), the solid and dash-dot curves) by almost an order of magnitude. This clearly indicates the failings of such a simple fitting procedure. ## 4 Summary In the quest to connect lattice measurements with the physical regime, we have explored the quark mass dependence of the $`N`$ and $`\mathrm{\Delta }`$ baryon masses using arguments based on analyticity and heavy quark limits. We have developed a method to access quark masses beyond the regime of chiral perturbation theory. This method reproduces the leading non-analytic behavior of $`\chi `$PT and accounts for the internal structure of the baryon under investigation. We find that the leading non-analytic term of the chiral expansion dominates from the chiral limit up to the branch point at $`m_\pi =\mathrm{\Delta }M\approx 300`$ MeV, beyond which $`\chi `$PT breaks down. The predictions of the CBM, and two-flavour dynamical-fermion lattice QCD results, are succinctly described by the formulae derived in . The curvature around $`m_\pi =\mathrm{\Delta }M`$, neglected in previous extrapolations of the lattice data, leads to shifts in the extrapolated masses of the same order as the departure of lattice estimates from experimental measurements.
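Since the cubic form of Eq. (2) is linear in its parameters, the fit can be sketched in a few lines of Python; the lattice points $`(m_\pi ,M_N)`$, in GeV, are to be supplied, and the fitted $`\gamma `$ is the number to be compared with the $`\chi `$PT value of $`-5.60`$ GeV$`^{-2}`$.

```python
import numpy as np

def fit_cubic(m_pi, m_N):
    """Least-squares fit of Eq. (2): M_N = alpha + beta*m_pi**2 + gamma*m_pi**3.
    The model is linear in (alpha, beta, gamma); inputs are in GeV."""
    m_pi, m_N = np.asarray(m_pi), np.asarray(m_N)
    A = np.column_stack([np.ones_like(m_pi), m_pi ** 2, m_pi ** 3])
    (alpha, beta, gamma), *_ = np.linalg.lstsq(A, m_N, rcond=None)
    return alpha, beta, gamma   # gamma is compared with -5.60 GeV^-2 from chiPT
```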
### Acknowledgments This work was supported in part by the Australian Research Council.
# BUTP–99/17 TTP99–38 hep-ph/9909436 September 1999 Second Order Corrections to the Muon Lifetime and the Semileptonic 𝐵 Decay ## 1 Introduction The Fermi coupling constant, $`G_F`$, together with the electromagnetic coupling constant and the mass of the $`Z`$ boson, constitutes one of the most precise input parameters of the Standard Model of elementary particle physics. $`G_F`$ is defined through the muon lifetime, and the decay of the muon, as a purely leptonic process, is rather clean, both experimentally and theoretically. The one-loop corrections of order $`\alpha `$ were computed more than 40 years ago whereas only recently the two-loop corrections of order $`\alpha ^2`$ have been evaluated . The large gap in time shows that this calculation is highly non-trivial. The inclusion of the two-loop terms removed the relative theoretical error of $`1.5\times 10^{-5}`$ which was an estimate on the size of the missing corrections. The remaining error on $`G_F`$ now reads $`0.9\times 10^{-5}`$ and is of purely experimental nature. Upcoming experiments will further improve the accuracy of the muon lifetime measurement, and therefore the $`𝒪(\alpha ^2)`$ corrections to the muon decay are very important and constitute a crucial ingredient from the theoretical side. These facts make it desirable to have an independent check on the correctness of the $`𝒪(\alpha ^2)`$ result. We also want to mention that in a recent article optimization methods have been used in order to estimate the coefficient of order $`\alpha ^3`$. In view of the upcoming $`B`$ physics experiments the evaluation of quantum corrections to $`B`$ meson properties has become topical. In particular it is possible to use the semileptonic decay rate of the bottom quark in order to determine the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements with quite some accuracy. An approximate expression for the $`𝒪(\alpha _s^2)`$ corrections to $`\mathrm{\Gamma }(b\to cl\overline{\nu }_l)`$ has been obtained in , where a non-vanishing charm quark mass has been included. In the decay $`b\to ue\overline{\nu }_e`$ the mass of the $`u`$ quark can be neglected, which reduces the calculation to the situation given in the muon decay. The only additional diagrams are those which arise due to the non-abelian structure of QCD. The results of order $`\alpha _s^2`$ have been obtained in . The corrections proportional to the number of light quarks have already been computed in . In this letter we confirm through an independent calculation the results of order $`\alpha ^2`$ for the muon decay and those of order $`\alpha _s^2`$ for the semileptonic bottom quark decay . In the next section our method is discussed; in Section 3 the results are presented. ## 2 Method and Notation In , the imaginary part of the muon propagator is computed up to the four-loop level. Recurrence relations based on the integration-by-parts technique are used in order to reduce the integrals to be evaluated to a minimal set, the so-called master integrals. They are evaluated by computing expansions in the ratio of external momentum and internal muon mass. It is possible to take the on-shell limit and perform the infinite sum, which finally leads to an exact result for the integrals. For concise reviews of expansion methods see e.g. .
In contrast, our approach is based on an expansion of the full fermion propagator in the limit $$M^2\gg q^2,$$ (1) where $`q`$ is the external momentum and $`M`$ is the propagator mass of the muon and bottom quark, respectively (there are, of course, diagrams that do not contain an internal propagator of mass $`M`$; these diagrams are computed without any expansion). Throughout the whole paper we will neglect effects induced by the non-vanishing electron and up quark masses. The on-shell limit $`q^2\to M^2`$ will be performed afterwards with the help of Padé approximations. This, of course, only provides an approximation to the exact result. However, the integrals to be evaluated are simplified considerably. We will demonstrate that the accuracy obtained with our method is sufficient to check the existing results and enables the same reduction of the theoretical error on $`G_F`$. The notation is essentially adopted from , where corrections of $`𝒪(\alpha _s^2)`$ to the decay $`t\to Wb`$ have been computed. For completeness we briefly repeat in this paper the main formulae. The decay rate — both for the muon and the bottom quark — can be written in the form $$\mathrm{\Gamma }=2M\text{Im}\left[zS_V^{OS}-S_S^{OS}\right]|_{z=1},$$ (2) where $$S_S^{OS}=Z_2^{OS}Z_m^{OS}\left(1-\mathrm{\Sigma }_S^0\right),S_V^{OS}=Z_2^{OS}\left(1+\mathrm{\Sigma }_V^0\right),$$ (3) are functions of the variable $$z=\frac{q^2}{M^2}.$$ (4) $`M`$ is the on-shell mass. $`\mathrm{\Sigma }_S^0`$ and $`\mathrm{\Sigma }_V^0`$ represent the scalar and vector parts of the corresponding fermion propagator, respectively. They are functions of the external momentum $`q`$ and the bare mass $`m^0`$ of the fermion under consideration. In our case they further depend on the bare electromagnetic coupling $`\alpha ^0`$ and the strong coupling constant $`\alpha _s^0`$, respectively, and are proportional to the square of the Fermi coupling constant, $`G_F^2`$. The mass renormalization constant, $`Z_m^{OS}`$, entering in (3) can be extracted from . In contrast, $`Z_2^{OS}`$ has to be evaluated in the limit $`M^2\gg q^2`$ since the handling of $`Z_2^{OS}`$ is determined by the computation of the fermion propagator. As we are only interested in the imaginary part, and furthermore consider only QED or QCD corrections to the leading order term, the quantity $`Z_2^{OS}`$ has to be known up to two loops only. The result can be taken from . The renormalization of $`\alpha `$ and $`\alpha _s`$ proceeds in the usual way, where we have chosen to renormalize also the electromagnetic coupling in a first step in the $`\overline{\mathrm{MS}}`$ scheme. In order to get reliable results it is necessary to compute as many terms as possible in the expansion parameter $`z`$. Subsequently a Padé approximation is applied, which is described at length in . We just want to mention that before the Padé procedure a conformal mapping can be used which maps the complex $`z`$-plane into the interior of the unit circle. Following Ref. we denote those results by $`\omega `$-Padés and the ones obtained without conformal mapping by $`z`$-Padés. Some Padé approximants develop poles inside the unit circle ($`|z|\le 1`$ and $`|\omega |\le 1`$, respectively) in conflict with the analyticity of the exact result. In general we will discard such numbers in the following. In some cases, however, the pole coincides with a zero of the numerator up to several digits’ accuracy, and these Padé approximations will be included in our sample.
To be precise: in addition to the Padé results without any poles inside the unit circle, we will use the ones where the poles are accompanied by zeros within a circle of radius 0.01, and the distance between the pole and the physically relevant point $`q^2/M^2=1`$ is larger than 0.1. The central values and the estimated uncertainty will be extracted from Padé results $`[m/n]`$ with $`m+n`$ not too small and $`|m-n|\le 2`$. It is convenient to parameterize the radiative corrections for the semileptonic bottom quark decay in the following form: $$\mathrm{\Gamma }(b\to ue\overline{\nu }_e)=\mathrm{\Gamma }_b^0\left[A_b^{(0)}+\frac{\alpha _s}{\pi }C_FA_b^{(1)}+\left(\frac{\alpha _s}{\pi }\right)^2A_b^{(2)}+\dots \right],A_b^{(2)}=C_F^2A_{b,A}^{(2)}+C_AC_FA_{b,NA}^{(2)}+C_FTn_lA_{b,l}^{(2)}+C_FTA_{b,F}^{(2)},$$ (5) with $`\mathrm{\Gamma }_b^0=G_F^2M_b^5|V_{ub}|^2/(192\pi ^3)`$. For QCD the colour factors are given by $`C_F=4/3`$, $`C_A=3`$, and $`T=1/2`$. $`n_l`$ is the number of massless quark flavours and will be set to $`n_l=4`$ at the end. $`A_{b,A}^{(2)}`$ corresponds to the abelian part already present in QED, $`A_{b,NA}^{(2)}`$ represents the non-abelian contribution, and $`A_{b,l}^{(2)}`$ and $`A_{b,F}^{(2)}`$ denote the corrections involving a second fermion loop with massless and heavy quarks, respectively. In principle there is also a contribution involving a virtual top quark loop. It is, however, suppressed by $`M_b^2/M_t^2`$ and will thus be neglected. In Fig. 1 a representative diagram for each one of these four colour structures is pictured. In Eq. (5) $`\alpha _s=\alpha _s(\mu )`$ is defined with five active flavours. The analytic result for $`A_b^{(2)}`$ can be found in . An analogous expression to (5) can also be defined in the case of the muon decay: $$\mathrm{\Gamma }(\mu \to \nu _\mu e\overline{\nu }_e)=\mathrm{\Gamma }_\mu ^0\left[A_\mu ^{(0)}+\frac{\overline{\alpha }}{\pi }A_\mu ^{(1)}+\left(\frac{\overline{\alpha }}{\pi }\right)^2A_\mu ^{(2)}+\dots \right],A_\mu ^{(2)}=A_{\mu ,\gamma \gamma }^{(2)}+A_{\mu ,e}^{(2)}+A_{\mu ,\mu }^{(2)},$$ (6) with $`\mathrm{\Gamma }_\mu ^0=G_F^2M_\mu ^5/(192\pi ^3)`$. $`\overline{\alpha }=\overline{\alpha }(\mu )`$ represents the electromagnetic coupling in the $`\overline{\mathrm{MS}}`$ scheme. $`A_{\mu ,\gamma \gamma }^{(2)}`$ represents the purely photonic corrections whereas $`A_{\mu ,e}^{(2)}`$ and $`A_{\mu ,\mu }^{(2)}`$ contain an additional electron and muon loop, respectively. The contribution involving a virtual $`\tau `$ loop is not listed in (6) as it is suppressed by $`M_\mu ^2/M_\tau ^2`$ and almost four orders of magnitude smaller than the other terms . Let us now describe the method used for the practical calculation in the case of the $`\mu `$ decay. The difference to the quark decay consists only in the transition from QED to QCD, which increases the number of diagrams and makes it necessary to include the colour factors; the idea is, however, applicable in the very same way. Following common practice, we investigate the effective theory where the $`W`$ boson is integrated out. The QED corrections to the resulting Fermi contact interaction were shown to be finite to all orders .
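As a simple numerical cross-check of the colour decomposition in Eq. (5), one can assemble $`A_b^{(2)}`$ from its four structures; the coefficient values inserted below are the central values quoted in Section 3 (with $`A_{b,A}^{(2)}`$, $`A_{b,l}^{(2)}`$ and $`A_{b,F}^{(2)}`$ carried over from the muon decay), and the direct sum carries a larger uncertainty than the procedure of adding the moments before the Padé step that is actually used.

```python
# SU(3) colour factors and the number of light flavours used in Eq. (5)
CF, CA, T, nl = 4.0 / 3.0, 3.0, 0.5, 4

def A2_bottom(A_A, A_NA, A_l, A_F):
    """Assemble A_b^(2) from its four colour structures, Eq. (5)."""
    return CF ** 2 * A_A + CA * CF * A_NA + CF * T * nl * A_l + CF * T * A_F

# central values from Section 3; the naive sum gives roughly -20.4,
# compatible within errors with the direct determination -21.1(6)
print(A2_bottom(3.5, -8.8, 3.2, 0.0364))
```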
It is quite advantageous to perform a Fierz transformation, which for a pure $`V-A`$ theory has the consequence that afterwards the two neutrino lines appear in the same fermion trace. Thus the QED corrections only affect the fermion trace involving the muon and the electron. This also provides some simplifications in the treatment of $`\gamma _5`$ since in the case of vanishing electron mass a fully anticommuting prescription can be used. As described above, we consider the fermion two-point functions and compute the imaginary part arising from the intermediate states with two neutrinos and the electron. As a consequence, already for the Born result a two-loop diagram has to be considered. However, it turns out that the loop integration connected to the two neutrino lines can be performed immediately as it constitutes a massless two-point function. This is also the case after allowing for additional photonic corrections. As a result one encounters in the resulting diagram a propagator with one of the momenta raised to the power $`\epsilon `$, where $`D=4-2\epsilon `$ is the space-time dimension. This slightly increases the difficulty of the computation of the resulting diagrams. Especially for the order $`\alpha ^2`$ corrections, where the original four-loop diagrams are reduced to three-loop ones with non-integer powers of denominators, it is a priori not clear that these integrals can be solved analytically. However, it turns out that for the topologies needed in our case this is indeed possible. For the computation of the massless two-point functions we have used the package MINCER . Slight modifications enabled us to use this package also for the computation of the new type of integrals. The calculation is performed with the help of the package GEFICOM . It uses QGRAF for the generation of the diagrams and EXP for the application of the asymptotic expansion procedures. For more technical details we refer to a recent review concerned with the automatic computation of Feynman diagrams . The dispersive and absorptive parts of the fermion self energies are gauge dependent for $`q^2\ne M^2`$. Hence the dependence on the gauge parameter $`\xi `$ in Eq. (2) only drops out after summing infinitely many terms in the expansion around $`z=0`$ and setting $`z=1`$. Since we are only dealing with a limited number of terms, our approximate results will still depend on the choice of $`\xi `$ even after taking $`z\to 1`$. It is clear that with extreme values of $`\xi `$ almost any number could be produced. Thus the question arises which value of $`\xi `$ should be taken in order to arrive at a reliable prediction for the decay rates. As the one-loop corrections can be evaluated for an arbitrary gauge parameter, an extensive study can be performed and we can gain some hints for the choice of $`\xi `$ at order $`\alpha ^2`$. ## 3 Results Let us in a first step present the results for the muon decay and afterwards discuss the additional diagrams necessary for the QCD corrections to the bottom quark decay. The lowest order (Born) diagram can be computed directly.
In this case the electron mass can be chosen different from zero and an expansion in $`M_e^2/M_\mu ^2`$ can be performed, reproducing the exact result $$A_\mu ^{(0)}=1-8\frac{M_e^2}{M_\mu ^2}-12\frac{M_e^4}{M_\mu ^4}\mathrm{ln}\frac{M_e^2}{M_\mu ^2}+8\frac{M_e^6}{M_\mu ^6}-\frac{M_e^8}{M_\mu ^8}.$$ (7) We want to demonstrate the power of our method for the order $`\alpha `$ corrections, where four three-loop diagrams contribute. At this order we are able to evaluate a large number of moments, which gives us an indication of how many terms are necessary at $`𝒪(\alpha ^2)`$ in order to obtain reliable results. Furthermore the computation can be performed for arbitrary gauge parameter, which also provides some hints for the two-loop QED corrections. Applying the asymptotic expansion in the limit $`M_\mu ^2\gg q^2`$ to the four three-loop diagrams contributing to the $`𝒪(\alpha )`$ correction leads to the following result for the first nine expansion terms: $$A_{\mu ,exp}^{(1)}=-\frac{11}{8}+\frac{25}{48}\xi +z_\mu \left(-\frac{61}{400}-\frac{1}{5}\xi \right)+z_\mu ^2\left(-\frac{47}{540}-\frac{1}{12}\xi \right)+z_\mu ^3\left(-\frac{6929}{132300}-\frac{1}{21}\xi \right)+z_\mu ^4\left(-\frac{11923}{352800}-\frac{1}{32}\xi \right)+z_\mu ^5\left(-\frac{439213}{19051200}-\frac{1}{45}\xi \right)+z_\mu ^6\left(-\frac{156487}{9525600}-\frac{1}{60}\xi \right)+z_\mu ^7\left(-\frac{931367}{76839840}-\frac{1}{77}\xi \right)+z_\mu ^8\left(-\frac{216409}{23522400}-\frac{1}{96}\xi \right)+𝒪(z_\mu ^9).$$ (8) In Tab. 1 results for the $`𝒪(\alpha )`$ coefficient can be found, where the Padé approximation is performed in the variable $`z_\mu =q^2/M_\mu ^2`$. Furthermore the gauge parameter, defined through the photon propagator $`-i(g^{\mu \nu }-\xi q^\mu q^\nu /q^2)/(q^2+iϵ)`$, is varied between $`\xi =-2`$ and $`\xi =+2`$ (despite the fact that for $`\xi >1`$ the generating functional is in principle not defined, we decided to choose this range for the gauge parameter). Padé results which develop poles for $`|z_\mu |\le 1`$ are in general represented by a dash. If an approximate cancellation with a zero from the numerator takes place (see the discussion above), they are marked by a star ($`*`$). The comparison with the exact result shows that for all values of $`\xi `$ reasonable agreement is found. However, there is a clear preference for $`\xi =0`$, where the agreement with the exact result is best (it seems that the $`\xi `$-dependent terms of $`A_{\mu ,exp}^{(1)}`$ follow from the construction rule $`25/48-\sum _{n=1}^{\mathrm{\infty }}z_\mu ^n/(n(n+4))`$, which for $`z_\mu =1`$ indeed gives zero). Thus we will adopt this value for the $`𝒪(\alpha ^2)`$ calculation. In Tab. 2 the gauge parameter is fixed to $`\xi =0`$ and in addition to the $`z`$-Padés also the $`\omega `$-Padés are listed. With the inclusion of more moments the approximation to the exact result improves. Taking only those results into account where eight or nine input terms enter, the following result for the order $`\alpha `$ correction can be deduced: $$A^{(1)}=-1.80(1).$$ (9) Here the notation $`-1.80(1)=-1.80\pm 0.01`$ has been adopted.
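The Padé step itself is elementary linear algebra; a small sketch, assuming the $`\xi =0`$ coefficients read off from Eq. (8), builds the $`[m/n]`$ approximant from the series and evaluates it at $`z_\mu =1`$. The printed values can be compared with the entries of Tab. 2 clustering around $`-1.80`$.

```python
import numpy as np
from fractions import Fraction as F

# z-expansion coefficients of A^(1)_{mu,exp} at xi = 0, from Eq. (8)
c = np.array([float(x) for x in [
    F(-11, 8), F(-61, 400), F(-47, 540), F(-6929, 132300),
    F(-11923, 352800), F(-439213, 19051200), F(-156487, 9525600),
    F(-931367, 76839840), F(-216409, 23522400)]])

def pade_at(c, m, n, z=1.0):
    """Evaluate the [m/n] Pade approximant built from c[0..m+n] at z."""
    cc = lambda k: c[k] if k >= 0 else 0.0
    # denominator coefficients q_1..q_n from the linear matching conditions
    A = np.array([[cc(m + 1 + i - j) for j in range(1, n + 1)] for i in range(n)])
    b = np.array([-c[m + 1 + i] for i in range(n)])
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # numerator coefficients p_0..p_m
    p = [sum(q[j] * cc(i - j) for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return np.polyval(p[::-1], z) / np.polyval(q[::-1], z)

for m, n in [(4, 4), (5, 3), (3, 5)]:
    print(f"[{m}/{n}](z=1) = {pade_at(c, m, n):.3f}")   # compare Tab. 2
```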
The excellent agreement with the exact result quoted in Tab. 2 encourages the use of our method. At order $`\alpha ^2`$ only the first eight moments are at hand. Using only seven and eight input terms changes the number of Eq. (9) to $$A^{(1)}=-1.80(2),$$ (10) which is still sufficiently accurate. Let us now move on to the two-loop QED corrections. Altogether 44 four-loop diagrams contribute. The application of the asymptotic expansion in the limit $`M_\mu ^2\gg q^2`$ leads to 72 sub- and cosub-diagrams, which have to be evaluated. The analytical expressions obtained from the asymptotic expansion are quite lengthy. Thus we refrain from listing the results explicitly and present them only in numerical form. Our results at $`𝒪(\alpha ^2)`$ are summarized in Tab. 3, where the scale $`\mu ^2=M_\mu ^2`$ has been adopted (the $`\mathrm{ln}\mu ^2/M_\mu ^2`$ terms can be reconstructed with the help of the $`\beta `$ function governing the running of $`\overline{\alpha }(\mu )`$). For comparison, in the last line the numbers presented in Ref. are listed. The results one obtains using the Padé approximants computed with seven and eight input terms read $$A_{\mu ,\gamma \gamma }^{(2)}=3.5(4),A_{\mu ,e}^{(2)}=3.2(6),A_{\mu ,\mu }^{(2)}=0.0364(1).$$ (11) It is remarkable that the central values agree well with the exact results, which can be interpreted as a sign that the quoted error estimates are quite conservative. Furthermore it can be claimed that via our method we were able to confirm the results of . The error of $`A_{\mu ,\mu }^{(2)}`$ is particularly small, as the expansion in $`z`$ converges very quickly. Note that in this case all $`\omega `$-Padés develop poles inside the unit circle. The $`z`$-Padés are, however, very stable. A similar behaviour has been found for the analogous contribution to the decay of the top quark into a $`W`$ boson and a bottom quark . A prediction for the decay rate of the muon up to order $`\alpha ^2`$ can in principle be obtained by summing the individual terms of (11). This would, however, significantly overestimate the error. It is more promising to add in a first step the moments of the single contributions and perform the Padé procedure for the sum. The corresponding results are shown in Tab. 4, which finally lead to $$A_\mu ^{(2)}=6.5(7).$$ (12) The deviation of the central value from the exact result of 6.743 is less than 3% and well covered by the extracted error of roughly 10%. Thus the knowledge of our results alone would also reduce the theoretical error on $`G_F`$ as mentioned in the Introduction. Using the results presented in this paper the decay rate of the muon reads $$\mathrm{\Gamma }(\mu \to \nu _\mu e\overline{\nu }_e)=\mathrm{\Gamma }_\mu ^0\left[0.9998-1.810\frac{\overline{\alpha }(M_\mu )}{\pi }+6.5(7)\left(\frac{\overline{\alpha }(M_\mu )}{\pi }\right)^2+\dots \right].$$ (13) As already noted in , the numerical coefficient in front of the second order corrections becomes very small if one uses the on-shell scheme for the definition of the coupling constant $`\alpha `$. Then the $`\overline{\text{MS}}`$ coupling is given by $`\overline{\alpha }(M_\mu )=\alpha (1+\alpha /(3\pi )\mathrm{ln}(M_\mu ^2/M_e^2))`$ and there is an accidental cancellation between the constant and the logarithm in the second order corrections. Let us now turn to the semileptonic decay of the bottom quark.
The Born and one-loop corrections can, of course, be taken from the muon decay. In particular we have $`A_\mu ^{(0)}=A_b^{(0)}`$ and $`A_\mu ^{(1)}=A_b^{(1)}`$. As far as the two-loop terms are concerned, only the non-abelian contribution, $`A_{b,NA}^{(2)}`$, has to be computed in addition. The other colour structures are related to the expressions occurring in the muon decay rate through $`A_{b,A}^{(2)}=A_{\mu ,\gamma \gamma }^{(2)}`$, $`A_{b,l}^{(2)}=A_{\mu ,e}^{(2)}`$, and $`A_{b,F}^{(2)}=A_{\mu ,\mu }^{(2)}`$, with obvious replacements of the masses. In Tab. 5 the results for $`A_{b,NA}^{(2)}`$ can be found. We infer $$A_{b,NA}^{(2)}=-8.8(4),$$ (14) where $`\mu ^2=M_b^2`$ has been chosen. The central value is again in very good agreement with the exact result and agrees within the error estimate of 5%. In order to get predictions for $`A_b^{(2)}`$ we again add the moments in a first step and perform the Padé approximations afterwards. From the results listed in Tab. 6 we deduce $$A_b^{(2)}=-21.1(6).$$ (15) This number is in good agreement with the one stated in . The error is quite small and amounts only to roughly 3%. The semileptonic decay rate of the bottom quark finally reads $$\mathrm{\Gamma }(b\to ue\overline{\nu }_e)=\mathrm{\Gamma }_b^0\left[1-2.413\frac{\alpha _s(M_b)}{\pi }-21.1(6)\left(\frac{\alpha _s(M_b)}{\pi }\right)^2+\dots \right].$$ (16) ## 4 Conclusions In this paper the two-loop QED corrections to the decay rate of the muon have been evaluated. A new method has been used in order to confirm via an independent calculation the result of Ref. . From the muon decay the Fermi coupling constant, $`G_F`$, is determined, which constitutes one of the basic input parameters of the Standard Model. Thus it is very important to have independent checks on such highly non-trivial computations. The inclusion of the new correction terms removes the theoretical error on $`G_F`$. After the additional computation of the non-abelian diagrams, the QCD corrections to the semileptonic decay rate of the bottom quark $`\mathrm{\Gamma }(b\to ue\overline{\nu }_e)`$ are obtained. Again agreement with the literature is found. ## Acknowledgments We would like to thank K.G. Chetyrkin and J.H. Kühn for valuable discussions and careful reading of the manuscript. Useful discussions with T. van Ritbergen are gratefully acknowledged. This work was supported by the Graduiertenkolleg “Elementarteilchenphysik an Beschleunigern”, the DFG-Forschergruppe “Quantenfeldtheorie, Computeralgebra und Monte-Carlo-Simulationen” under Contract Ku 502/8-1 and the Schweizer Nationalfonds.
# A quantum holographic principle from decoherence ## I INTRODUCTION It was first propounded by ’t Hooft that the observable degrees of freedom of a $`3+1`$ dimensional world can equally well be realised at the boundary of the system. This idea fits in perfectly with Bekenstein’s result that the total entropy of a black hole cannot exceed a quarter of the area of its event horizon . Following ’t Hooft’s reasoning, this suggests that all the information of a black hole can be collected from the surface of its horizon. There also exists a conjecture in supergravity Yang-Mills theories that the bulk information can be successfully stored at the boundary of an anti-de-Sitter space . All these results lead to the conjecture that all information about a system can be obtained from its boundary, which is known as the holographic principle. However, the application of the holographic principle, as it stands, is restricted to situations where one can uniquely define the boundary of a system. What could be the boundary of a purely quantum mechanical system? One could imagine it to be the boundary of the classical potential which traps the system. From such considerations, for a particle in a box, the box would be its boundary. However, a quantum system can always tunnel out of any finite classical potential. Moreover, in reality, classical potential wells are the results of the interaction of a quantum system with other quantum fields, and the notion of a classical potential well does not even exist in an entirely quantum universe. This motivates us to formulate a version of the holographic principle entirely in terms of quantum systems and their interactions. To prove this version of the principle, we rely on the notion of environment induced decoherence (EID) . EID is a process used to explain the emergence of classicality from the quantum world. When an otherwise isolated quantum system interacts with its environment, its state evolves to a diagonal mixed state in a specific pointer basis . This is also a mechanism for entropy production in an open quantum system. We show that if this is the sole entropy generation mechanism in a system, then a quantum version of the holographic principle can be proved. In other words, we assume that all the entropy of a quantum system arises from its interactions with its environment. This assumption is justified if all systems in our universe are quantum. Their entropy cannot increase due to their own unitary evolution. They gain entropy only when they interact with other systems. We will first give an operational definition of the boundary of a quantum system in terms of the notions of systems, sub-systems and their interactions. Next we will show how the decoherence of the system leads to all information about the system being stored in its boundary (as defined by us). After this, we will use the stability of the states of the pointer basis in decoherence to justify why the information remains stored in this way. To conclude the paper, we will point out the conditions required for our quantum holographic principle to go over to the standard (semi-classical) holographic principle. ## II THE QUANTUM HOLOGRAPHIC PRINCIPLE Let there be a quantum system $`\mathrm{S}`$. All quantum systems interacting with it together constitute another system, called the environment $`\mathrm{E}`$. In general, however, not all constituents of the system $`\mathrm{S}`$ will interact with $`\mathrm{E}`$ directly.
Therefore, we divide the system $`\mathrm{S}`$ into two subsystems $`\mathrm{A}`$ and $`\mathrm{B}`$, with $`\mathrm{B}`$ being the subsystem which directly interacts with $`\mathrm{E}`$. We define $`\mathrm{B}`$ to be the boundary of the system $`\mathrm{S}`$. Using this definition of the boundary, we can formulate the following holographic principle: all information about $`\mathrm{S}`$ can be obtained from $`\mathrm{B}`$. As we have assumed that all the entropy of the system is due to its interaction with its environment, we can start the system $`S`$ off, before any interaction with $`\mathrm{E}`$, in the pure state $`|\psi \rangle _{\mathrm{AB}}`$. Let us assume that $`\widehat{X}_\mathrm{B}`$ is the operator whose eigenstates $`|X_i\rangle _\mathrm{B}`$ form the pointer basis for the subsystem $`\mathrm{B}`$. This means that $`\mathrm{B}`$ interacts with $`\mathrm{E}`$ via an interaction Hamiltonian of the type $`H_{\mathrm{BE}}=g_1\widehat{X}_\mathrm{B}\widehat{Y}_\mathrm{E},`$ (1) where $`\widehat{Y}_\mathrm{E}`$ is some operator in the Hilbert space of the system $`\mathrm{E}`$ and $`g_1`$ is the coupling strength. If $`\mathrm{E}`$ is a sufficiently large system, and if $`\mathrm{B}`$ has many degrees of freedom interacting with $`\mathrm{E}`$, then this interaction leads to a diagonalization of the state of $`\mathrm{B}`$ in the basis $`|X_i\rangle `$. The initial pure state of the system $`\mathrm{S}`$ can be written in this basis as $`|\psi \rangle _{\mathrm{AB}}={\displaystyle \underset{i}{\sum }}c_i|\varphi _i\rangle _\mathrm{A}|X_i\rangle _\mathrm{B},`$ (2) where the states $`|\varphi _i\rangle _\mathrm{A}`$ need not be orthogonal, but are assumed to be normalized. After interaction of $`\mathrm{B}`$ with $`\mathrm{E}`$ for a span of time longer than the decoherence time scale, this state evolves to $`\rho _{\mathrm{AB}}={\displaystyle \underset{i}{\sum }}|c_i|^2|\varphi _i\rangle _\mathrm{A}\langle \varphi _i|_\mathrm{A}|X_i\rangle _\mathrm{B}\langle X_i|_\mathrm{B}.`$ (3) At this stage the von Neumann entropy $`S^v(\rho _{\mathrm{AB}})`$ of the system $`S`$ is given by $`S^v(\rho _{\mathrm{AB}})=-\mathrm{Tr}(\rho _{\mathrm{AB}}\mathrm{ln}\rho _{\mathrm{AB}})=-{\displaystyle \underset{i}{\sum }}|c_i|^2\mathrm{ln}|c_i|^2.`$ (4) The von Neumann entropy of the boundary $`B`$ is obtained from $`\rho _\mathrm{B}=\mathrm{Tr}_\mathrm{A}(\rho _{\mathrm{AB}})`$ to be $`S^v(\rho _\mathrm{B})=-\mathrm{Tr}(\rho _\mathrm{B}\mathrm{ln}\rho _\mathrm{B})=-{\displaystyle \underset{i}{\sum }}|c_i|^2\mathrm{ln}|c_i|^2=S^v(\rho _{\mathrm{AB}}).`$ (5) It is known that the von Neumann entropy of a system is equal to the amount of information one can acquire on observing the system’s state . Eq.(5) shows that the entire information $`S^v(\rho _{\mathrm{AB}})`$ about the system $`\mathrm{S}`$ can be gained by just acquiring the information $`S^v(\rho _\mathrm{B})`$ stored on its boundary $`\mathrm{B}`$. However, how stable is this information stored in the boundary? In fact, what about the possibility of Eq.(5) losing its validity due to the interaction between the subsystems $`\mathrm{A}`$ and $`\mathrm{B}`$ of the system $`\mathrm{S}`$? This possibility can be prevented if $`\mathrm{B}`$ is a subsystem with a large number of degrees of freedom (i.e., a macroscopic system). Then, by definition of the pointer basis, the states of the basis $`|X_i\rangle `$ are stable states. In fact, it is the stability of states in the pointer basis that makes them ideal candidates for classical states.
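Before proceeding, we note that Eq.(5) is simple to verify numerically. The following sketch (our own illustration; the dimension, the weights, and the non-orthogonal states $`|\varphi _i\rangle `$ are arbitrary choices) builds the decohered state of Eq.(3) and compares the two entropies:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy -Tr(rho ln rho), from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rng = np.random.default_rng(1)
d = 3                                   # dimensions of A and of B
c2 = rng.random(d); c2 /= c2.sum()      # the weights |c_i|^2
phi = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
phi /= np.linalg.norm(phi, axis=1, keepdims=True)  # normalized, generally non-orthogonal
X = np.eye(d)                           # pointer states |X_i> of B

# decohered state of eq. (3): rho_AB = sum_i |c_i|^2 |phi_i><phi_i| |X_i><X_i|
rho_AB = sum(c2[i] * np.kron(np.outer(phi[i], phi[i].conj()),
                             np.outer(X[i], X[i])) for i in range(d))
rho_B = rho_AB.reshape(d, d, d, d).trace(axis1=0, axis2=2)  # partial trace over A
print(vn_entropy(rho_AB), vn_entropy(rho_B))  # both equal -sum |c_i|^2 ln |c_i|^2
```

Both printed entropies coincide, as Eq.(5) asserts, regardless of the overlaps among the $`|\varphi _i\rangle `$.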
The stability of the pointer states demands that the subsystem $`\mathrm{B}`$ interacts with the subsystem $`\mathrm{A}`$ via an interaction Hamiltonian of the type $`H_{\mathrm{AB}}=g_2\widehat{X}_\mathrm{B}\widehat{Z}_\mathrm{A},`$ (6) where $`\widehat{Z}_\mathrm{A}`$ is an operator in the Hilbert space of $`\mathrm{A}`$ and $`g_2`$ is the coupling strength. In Eq.(6), $`\widehat{X}_\mathrm{B}`$ can be replaced by any operator that commutes with $`\widehat{X}_\mathrm{B}`$. As long as $`H_{\mathrm{AB}}`$ is of the form described by Eq.(6), the state of the system $`\mathrm{S}`$ will evolve only to states of the form $`\rho _{\mathrm{AB}}^{}={\displaystyle \underset{i}{\sum }}|c_i|^2|\varphi _i^{}\rangle _\mathrm{A}\langle \varphi _i^{}|_\mathrm{A}|X_i\rangle _\mathrm{B}\langle X_i|_\mathrm{B},`$ (7) where $`|\varphi _i^{}\rangle _\mathrm{A}=e^{-ig_2X_i\widehat{Z}_\mathrm{A}t}|\varphi _i\rangle _\mathrm{A}`$ (in which $`t`$ denotes the time). Thus Eq.(5) continues to be satisfied, and the entire information about the system $`\mathrm{S}`$ can be learnt from its boundary $`\mathrm{B}`$. In the above proof of the quantum holographic principle, the facts that $`\mathrm{B}`$ was macroscopic enough to decohere, and that it was coupled to both $`\mathrm{A}`$ and $`\mathrm{E}`$ by the same operator $`\widehat{X}_\mathrm{B}`$, were important. We can give a counterexample to the quantum holographic principle when the above conditions do not hold. Let $`\mathrm{S}`$ be a very simple system comprising two spin-$`\frac{1}{2}`$ particles $`\mathrm{A}`$ and $`\mathrm{B}`$. Only $`\mathrm{B}`$ directly interacts with another spin-$`\frac{1}{2}`$ particle $`\mathrm{E}`$, and as such is the boundary of $`\mathrm{S}`$. We take the initial state of $`\mathrm{A}`$, $`\mathrm{B}`$ and $`\mathrm{E}`$ to be of the following form $`(|\uparrow \rangle _\mathrm{A}|\uparrow \rangle _\mathrm{B}+|\downarrow \rangle _\mathrm{A}|\downarrow \rangle _\mathrm{B})|\uparrow \rangle _\mathrm{E}.`$ (8) At this stage, the system $`\mathrm{S}`$ has zero entropy, and so does the environment $`\mathrm{E}`$. Let the unitary interaction $`U_{\mathrm{BE}}`$ between $`B`$ and $`E`$ be the following $`|\uparrow \rangle _\mathrm{B}|\uparrow \rangle _\mathrm{E}\to |\uparrow \rangle _\mathrm{B}|\uparrow \rangle _\mathrm{E},`$ (9) $`|\downarrow \rangle _\mathrm{B}|\uparrow \rangle _\mathrm{E}\to |\uparrow \rangle _\mathrm{B}|\downarrow \rangle _\mathrm{E}.`$ (10) Then the final state of $`A`$, $`B`$ and $`E`$ is as follows $`(|\uparrow \rangle _\mathrm{A}|\uparrow \rangle _\mathrm{E}+|\downarrow \rangle _\mathrm{A}|\downarrow \rangle _\mathrm{E})|\uparrow \rangle _\mathrm{B}.`$ (11) All entropy of the system $`\mathrm{S}`$ is stored in $`A`$. The boundary $`\mathrm{B}`$ has no entropy. This counterexample illustrates the importance of the Hamiltonians $`H_{\mathrm{BE}}`$ and $`H_{\mathrm{AB}}`$ being of the form given by Eqs.(1) and (6) for the quantum holographic principle to hold true. ## III CONCLUSION We have shown that under a specific set of conditions, the entire information about a quantum system $`\mathrm{S}`$ can be obtained from its subsystem $`\mathrm{B}`$ which directly interacts with the environment. In the language of quantum measurement theory, the boundary $`\mathrm{B}`$ acts as an apparatus for the state of the whole system $`\mathrm{S}`$. While the standard (semi-classical) holographic principle helps in making the $`3`$ dimensional world effectively $`2`$ dimensional, the quantum version could lead to a reduction of the Hilbert space dimensions of a problem. For the quantum version of the principle to reduce to the semi-classical version, we require: $`(1)`$ The physical boundary of the system in the semi-classical version to coincide with the subsystem $`\mathrm{B}`$.
In other words, we require the boundary to be interacting strongly with the environment and the bulk to have insignificant interaction with the outside world. $`(2)`$ The boundary itself to have quite a large number of degrees of freedom, so that it is macroscopic and strongly decohering (i.e., nearly a classical system). It must also be coupled both to its environment and to its bulk by the same operator $`\widehat{X}_\mathrm{B}`$. $`(3)`$ The system to be an open system, so the principle would not apply to closed systems such as the entire universe. To conclude, we emphasize that there may be other methods to derive the holographic principle which apply to less restricted situations. But if we assume that all systems are essentially quantum, and gain entropy only from interaction with their environment, then the requirements $`(1)`$–$`(3)`$ are probably essential for the validity of the standard holographic principle. ###### Acknowledgements. The authors are supported by the INLAKS foundation and the ORS award.
# Universal Scaling of Wave Propagation Failure in Arrays of Coupled Nonlinear Cells \[ ## Abstract We study the onset of the propagation failure of wave fronts in systems of coupled cells. We introduce a new method to analyze the scaling of the critical external field at which fronts cease to propagate, as a function of intercellular coupling. We find the universal scaling of the field throughout the range of couplings, and show that the field becomes exponentially small for large couplings. Our method is generic and applicable to a wide class of cellular dynamics in chemical, biological, and engineering systems. We confirm our results by direct numerical simulations. \] The impact of discreteness on the propagation of phase fronts in biophysical, chemical, and engineering systems has been intensively studied during the last decade. Among the diverse examples are calcium release waves in living cells , reaction fronts in chains of coupled chemical reactors , arrays of coupled diode resonators , and discontinuous propagation of action potential in cardiac tissue . All these disparate systems share the common phenomenon of wave front propagation failure, independent of the specific details of each system. Recently this effect has drawn considerable attention (see, e.g., Refs. ). Numerous experimental results show that propagation failure occurs at finite values of the coupling strength (a critical coupling). This is contrary to continuous systems, where wave fronts propagate for arbitrary couplings . A challenging problem is to establish the universal properties of the critical coupling; this is crucial for making predictions of qualitatively different regimes of system dynamics. In this Letter we consider the universal behavior of phase separation fronts in one-dimensional nonlinear discrete systems in an external field. We study the propagation failure transition for a class of simple dynamical models describing experimental observations in arrays of coupled nonlinear cells, such as chains of bistable chemical reactors , systems of cardiac cells , etc. A new analytical method is presented to study generic properties of the critical external field of the transition. Using this method, we find how the critical field scales with the intrachain coupling; the method is applicable over a wide range of couplings. We confirm our analytical predictions by direct numerical simulations of the full system. Our model is in general given by the following set of coupled nonlinear equations: $$\gamma \frac{du_n}{dt}=C(u_{n+1}+u_{n-1}-2u_n)-\frac{\partial G(u_n,E)}{\partial u_n}.$$ (1) Here $`u_n`$ is the order parameter at the $`n`$-th site, $`\gamma `$ is the damping coefficient, $`C`$ is the coupling constant, and $`G(u_n,E)`$ is the onsite potential, where $`E`$ is the applied field. The potential has at least two minima separated by a barrier $`u_B`$. The external field $`E`$ is responsible for the energy difference between the minima. This provides for one globally stable and one metastable minimum, $`u=u_+`$ and $`u=u_{}`$, respectively. Phase fronts connect the two minima and tend to propagate, so as to increase the size of the energetically more favorable phase, $`u_+`$. The mechanism of front propagation is governed by the competition between the system discreteness and the driving field $`E`$. This competition gives rise to the propagation failure at the critical field $`E_c`$, which depends upon the intercellular coupling . For $`E>E_c`$, the front propagates at a velocity $`V`$ (see, e.g., Ref.
) vanishing at the transition point, $`E=E_c`$. The results presented in this Letter are generic and apply to a wide class of systems, independently of the details, as long as the potential $`G(u,E)`$ has the bistable structure described above . We start our analysis with the sine-Gordon potential, $`G=K(1-\mathrm{cos}u-Eu)`$, with the factor $`K`$ being the potential amplitude. This potential was chosen in order to address systems with phase dynamics, where the order parameter possesses a natural periodicity, such as arrays of Josephson junctions . The dimensionless dynamics in this case are given by $$\frac{du_n}{dt}=\beta (u_{n+1}+u_{n-1}-2u_n)+E-\mathrm{sin}u_n,$$ (2) where time is rescaled as $`t\to tK/\gamma `$ and the dimensionless coupling is $`\beta =C/K`$. Our consideration is focused on the elementary fronts connecting two nearest minima out of the infinite set of potential minima. The system is invariant with respect to the shift $`u\to u+2\pi `$, so we choose without loss of generality $`u_{-}=\mathrm{arcsin}E,u_+=u_{-}+2\pi ,u_B=\pi -\mathrm{arcsin}E`$. This makes the dynamics effectively bistable. We introduce a novel analysis to find the universal dependence of the critical field $`E_c`$ on the coupling $`\beta `$. For not too large $`\beta `$, the results are in good quantitative agreement with the corresponding asymptotic description. Finally we address a bistable fourth degree polynomial potential with the corresponding force in Eq. (1), $`\partial G/\partial u=u(u-1)(u-1/2+E)`$. Such a potential is applied, e.g., to describe the propagation failure in arrays of chemical reactors and coupled diode resonators . This illustrates the applicability of our approach to generic potentials. The obtained analytical results for both potentials are confirmed by numerical simulations of the full systems. For large dimensionless coupling, $`\beta \gg 1`$, Eqs. (2) approach the continuous regime described by the overdamped sine-Gordon equation, $`u_t=\beta u_{xx}-\mathrm{sin}u+E`$. Here $`x`$ is a spatial coordinate standing for the continuous site number. This equation possesses a front solution, which propagates at nonzero velocity, for any driving field $`E`$. For small field, the front has the form $`u\approx 4\mathrm{arctan}\left[\mathrm{exp}(z/\sqrt{\beta })\right]`$, with the traveling wave coordinate $`z=x-Vt`$ and velocity $`V\propto E\sqrt{\beta }`$ . Therefore, the critical field $`E_c`$ vanishes in the continuous regime. For finite $`\beta `$, however, the external driving can be balanced by the effect of discreteness, thus leading to a propagation failure at a finite value of $`E=E_c(\beta )`$. Note that for a wide class of bistable systems, there is a global instability at $`E=E_{gl}`$, above which the potential minima $`u_\pm `$ (or one of them) cease to exist. Typically the propagation failure occurs for $`E_c<E_{gl}`$. In particular, for Eqs. (2) one has $`E_{gl}=1`$. In order to study the onset of the propagation failure, one has to consider the stationary case of Eqs. (2), i.e., $`\partial u_n/\partial t=0`$ (Fig. 1). We denote the site closest to the barrier separating the two potential minima as the “front site” and assign to it the number $`n=0`$. Before developing the main approach of the present study, we briefly present our results obtained by means of a single-active-site theory, valid in the discrete regime of not too large coupling $`\beta `$.
This theory is based on the fact that, for such $`\beta `$, only the front site, $`n=0`$, experiences the nonlinearity, while all the other sites are close enough to either $`u_+`$ or $`u_{}`$ to be in the linear regime; see Fig. 1. Solving the linearized version of Eqs. (2) in the stationary case, one obtains $`u_n=\mathrm{\Delta }+A\mathrm{exp}(\lambda n)`$ and $`u_n=2\pi +\mathrm{\Delta }+B\mathrm{exp}(-\lambda n)`$, for $`n\le 0`$ and $`n\ge 0`$, respectively. Here $`\mathrm{\Delta }=\mathrm{arcsin}E`$ and $`\lambda =\text{arccosh}[\mathrm{\hspace{0.17em}1}+\mathrm{cos}\mathrm{\Delta }/(2\beta )]`$. Matching these at $`n=0`$ leads to $`A=2\pi +B`$, which, after substitution into the full nonlinear equation for $`u_0`$, yields $$2\beta (\pi +B)(e^{-\lambda }-1)+\mathrm{sin}(B+\mathrm{\Delta })=E.$$ (3) For $`E<E_c`$, Eq. (3) possesses two solutions, corresponding to stable and unstable fronts in the original problem (2). At the bifurcation point $`E=E_c`$ these solutions merge and disappear, so that for $`E>E_c`$ no stationary solution to Eqs. (2) exists, and the front propagates. This implies that at the bifurcation point the derivative of the left hand side of Eq. (3) with respect to $`B`$ must equal zero. We then obtain, after some calculations, an approximate expression for $`E_c`$: $`E_c`$ $`\approx {\displaystyle \frac{\sqrt{2\sqrt{1+4\beta }-4\beta -1}-f(\beta )\mathrm{arccos}[f(\beta )]}{1+f(\beta )}},`$ (5) $`\text{where}f(\beta )=2\left[1-{\displaystyle \frac{\beta +1}{1+2\beta +\sqrt{1+4\beta }}}\right].`$ The graph of $`E_c(\beta )`$ given by (5) is shown in Fig. 2 (dotted line). We see from the figure that it agrees well with the results of numerical simulations of the full system (2) for small to moderate values of $`\beta `$ ($`\beta \lesssim 0.5`$). This makes the single-active-site theory substantially more useful than a regular small-$`\beta `$ perturbation theory, which works only for much smaller $`\beta `$ ($`\lesssim 0.1`$). Although the single-active-site theory gives reasonable predictions for the propagation failure transition for moderate values of the intrachain coupling $`\beta `$, it does not provide a universal scaling of the critical field $`E_c`$ with $`\beta `$ (see Fig. 2). This motivates developing a general theory to describe the phenomenon of the propagation failure throughout the range of $`\beta `$. Such a theory is presented below. A stationary front in Eq. (2) can be obtained as an extremum of the free energy $$\mathcal{F}=\underset{n}{\sum }\left[\frac{\beta }{2}(u_{n+1}-u_n)^2+1-\mathrm{cos}u_n-Eu_n\right].$$ (6) For the case of zero field $`E`$, the front is always stationary, and no transition exists. In this case we have for the front, taking into account the effect of discreteness (see also ), $`u_n=w(n)-{\displaystyle \frac{\mathrm{sin}w(n)}{12\beta }},`$ (7) $`w(x)=2\mathrm{arctan}\left[b\,\mathrm{sinh}^{-1}\left({\displaystyle \frac{bx}{\sqrt{\beta }}}\right)\right],x\le 0,`$ (8) $`w(x)=2\pi -w(-x),x\ge 0,`$ (9) with the coefficient $`b=[\mathrm{\hspace{0.17em}1}-1/(12\beta )]^{1/2}`$. At zero field $`E`$, the free energy $`\mathcal{F}`$ (6) possesses an infinite set of minima of equal depths, separated by barriers. Each of these minima corresponds to a stable stationary front and differs from the other fronts by a shift by an integer number of sites. Each barrier corresponds to an unstable front. When the external field $`E`$ is applied, the set of minima is tilted, so that the difference between their depths is determined by the field value.
As the field increases, each of the minima approaches its adjacent barrier (down the energy landscape), until they finally merge at $`E=E_c`$. For $`E>E_c`$, the energy has no extrema left, and therefore the fronts propagate without limit. Our method is based on choosing profiles $`\{u_n\}`$ that can be parameterized by an effective coordinate $`\alpha `$, replacing the argument $`n`$ of the function $`w(n)`$ in (7) with $`n+\alpha `$. We show below that this choice allows us to find the critical field $`E_c`$. It can be demonstrated that, for $`E=0`$, the points $`\alpha =1/2+m`$ ($`m`$ an integer) correspond to stable fronts, and $`\alpha =m`$ to unstable fronts. The former and the latter are energy minima and maxima, respectively, satisfying $`d\mathcal{F}/d\alpha =0`$. Note that the chosen parameterization by $`\alpha `$ may be interpreted as a continuous shift of a profile $`\{u_n\}`$. Our goal now is to evaluate the free energy landscape $`\mathcal{F}(\alpha )`$ given by (6) for nonzero field $`E`$. When $`E=E_c`$, the minima of $`\mathcal{F}(\alpha )`$ merge with their adjacent maxima at the inflection points $`d^2\mathcal{F}/d\alpha ^2=0`$. We have substituted the fronts (7), with $`w(n+\alpha )`$, into the energy (6) and found $`E_c`$ for various $`\beta `$’s, using Mathematica software. The result is shown in Fig. 2 (dashed line). In order to find the analytical form of $`\mathcal{F}(\alpha )`$, we use the Poisson summation formula for infinite series: $$\underset{n=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}F(n)=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}F(x)\left[\mathrm{\hspace{0.17em}1}+2\underset{k=1}{\overset{\mathrm{\infty }}{\sum }}\mathrm{cos}(2\pi kx)\right]𝑑x,$$ (10) where $`F(n)`$ is the $`n`$-th element of the series in (6). One finds that the integral $`\int F(x)𝑑x=-2\pi E\alpha `$, up to a term independent of $`\alpha `$. Then, after closing the integration contour in the complex plane, we see that the $`k>1`$ terms in (10) are exponentially small compared to the $`k=1`$ term. This dominant term can be evaluated approximately to give the following expression for $`\mathcal{F}(\alpha )`$ $$\mathcal{F}(\alpha )=-2\pi E\alpha +\mathrm{\Omega }\beta \mathrm{cos}(2\pi \alpha )\mathrm{exp}\left[-\frac{\pi ^2\beta }{\sqrt{\beta -1/12}}\right].$$ (11) The factor $`\mathrm{\Omega }`$ is a constant and can be found by comparison of (11) with the results obtained by Mathematica (see Fig. 2), which yields $`\mathrm{\Omega }\approx 340`$. Then, using the conditions $`d\mathcal{F}/d\alpha =d^2\mathcal{F}/d\alpha ^2=0`$, we find $$\alpha =\frac{1}{4},E_c=\mathrm{\Omega }\beta \mathrm{exp}\left[-\frac{\pi ^2\beta }{\sqrt{\beta -1/12}}\right].$$ (12) The general scaling $`E_c(\beta )`$ of the propagation failure transition given by (12) virtually coincides with the one obtained by Mathematica, shown in Fig. 2. In the continuous regime of large $`\beta `$, the result (12) has the simple asymptotic form $`E_c=\mathrm{\Omega }\beta \mathrm{exp}[-\pi ^2\sqrt{\beta }]`$. This implies that the critical field $`E_c`$ decays exponentially for large couplings $`\beta `$. To confirm our general theory, we have performed numerical simulations of the full system (2), using an implicit second order integration method to obtain the transition line $`E_c(\beta )`$. The results are presented in Fig. 2 (solid line). We see in the figure a good quantitative agreement of our analytical predictions with the results of the simulations.
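Such a depinning scan is straightforward to reproduce. The sketch below is our own illustration, not the code used for Fig. 2: the paper's simulations use an implicit second-order integrator, whereas this uses explicit Euler, and the grid size, integration time, and displacement threshold are all illustrative choices. It locates $`E_c`$ by bisecting between pinned and propagating fronts of Eqs. (2):

```python
import numpy as np

def front_moves(beta, E, N=200, T=400.0, dt=0.01):
    """Integrate eq. (2) from a step-like front; True if the front depins."""
    delta = np.arcsin(E)
    u = np.where(np.arange(N) < N // 2, delta, 2*np.pi + delta).astype(float)
    pos0 = int(np.argmin(np.abs(u - (np.pi + delta))))   # crude front locator
    for _ in range(int(T / dt)):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2*u
        lap[0] = u[1] - u[0]
        lap[-1] = u[-2] - u[-1]                          # free-end boundaries
        u += dt * (beta * lap + E - np.sin(u))
    return abs(int(np.argmin(np.abs(u - (np.pi + delta)))) - pos0) > 2

def critical_field(beta, lo=0.0, hi=0.99, iters=12):
    """Bisect on E between a pinned (lo) and a moving (hi) front."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if front_moves(beta, mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(critical_field(0.5))   # compare with the E_c(beta) curve of Fig. 2
```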
To address the propagation failure in arrays of chemical reactors and diode resonators , we turn to the fourth degree polynomial potential leading to a cubic force in Eq. (1): $`\partial G/\partial u=u(u-1)(u-1/2+E)`$. The globally stable and metastable minima for this potential are $`u_+=1`$ and $`u_{-}=0`$, respectively; the barrier is $`u_B=1/2-E`$. Applying the general theory developed above, we find the propagation failure transition line $`E_c(\beta )`$. In Fig. 3 we show the results obtained by Mathematica (dashed line). We see again that the theory is confirmed by numerical simulations of the full system (solid line). The analytic expression for $`E_c(\beta )`$ analogous to (12) is cumbersome , so in this Letter we only give its asymptotic form for large $`\beta `$: $`E_c=\mathrm{\Omega }\beta \mathrm{exp}(-\eta \sqrt{\beta })`$. This form has the same structure as the asymptotics of the critical curve (12) for the sine-Gordon potential, which provides a persuasive argument that the proposed approach is generic, independent of the specific potential. The values of $`\mathrm{\Omega }`$ and $`\eta `$ are found to be $`\mathrm{\Omega }\approx 429,\eta \approx 21.9`$. To conclude, we have introduced a general method to analyze the transition of front propagation failure in arrays of coupled nonlinear cells. Using this method we have determined, for bistable dynamics of the cells, how the critical external field of the transition scales with the intrachain coupling. To demonstrate the generality of the new method, we have carried out our analysis for two different effectively bistable potentials: (i) the sine-Gordon potential, useful, e.g., for describing arrays of Josephson junctions ; and (ii) a fourth degree polynomial potential applicable to chains of chemical reactors and coupled diode resonators . Our theoretical predictions have been confirmed by numerical simulations of the full systems. After this study of the propagation failure problem in the one-component systems (1), the following question arises: what is the mechanism of this phenomenon in systems described by more complicated dynamics? For example, the “single-pool model” for intracellular calcium waves is represented by two dynamic components: the calcium concentration and the fraction of vacant ionic channels. An important and virtually unexplored issue is how the discreteness of ionic channels determines the onset of the propagation failure of calcium waves. We expect that our techniques can provide new insights into this and other types of more complicated chains of coupled nonlinear elements in biophysics, chemistry, and engineering. A further challenging direction of study here is to extend our analysis of the propagation failure to two- and three-dimensional systems. This would allow one to understand the effect of discreteness on the dynamics and stability of such structures as spiral and scroll waves in cardiac tissue . We thank John Pearson for fruitful discussions and valuable advice. This work was supported by the Department of Energy under contract W-7405-ENG-36.
# Implications of an Obscured AGN Model for the X-ray Background at Sub-mm and Far Infra-Red Wavelengths ## 1 Introduction Recently, enormous advances have been made in the field of sub-mm astronomy, most notably with the commissioning of SCUBA, the Sub-mm Common User Bolometer Array (\[Gear & Cunningham\]; \[Holland et al.\]), on the James Clerk Maxwell Telescope (JCMT). The $`850\mu \mathrm{m}`$ surveys of Smail, Ivison & Blain (1997), \[Hughes et al.\] (1998) and \[Barger et al.\] (1998) suggest that the sub-mm counts show strong evidence of evolution. The sources detected at this wavelength appear to be dusty, high redshift galaxies. However, the origin of the emission is not clear, and these distant dusty galaxies could be powered by either starburst or AGN activity. The question of which component dominates is highly important: if starbursts dominate, it would imply that a significant amount of star-formation has been missed by studies which use ultra-violet techniques to claim that the star-formation rate (SFR) decreases in galaxies at high redshift \[Madau et al.\]. \[Metcalfe et al.\] (1996) have previously suggested that models which include dust, with a monotonically increasing SFR towards higher redshift, can also give good fits to faint galaxy counts and colours (see also \[Wang\]; \[Gronwall & Koo\]; \[Campos & Shanks\]; \[Steidel et al.\]). It has also been known for some time that some ultraluminous infra-red galaxies (ULIRGs), which are strongly star-forming, dusty galaxies, also contain buried QSO nuclei. Images of ULIRGs in polarized light have shown highly anisotropic structure, such as that observed in the IRAS galaxy F10214+4724 \[Lawrence et al.\]. Similar polarization structure is seen in Seyfert 2s and high redshift radio galaxies, and is thought to be indicative of non-uniform illumination, consistent with the Unified Model of AGN. It has been inferred that the QSO and ULIRG phenomena are closely related, with ULIRGs postulated to be Type 2 objects, or “QSO-2s” \[Hines\]. Genzel et al. (1998) have made ISO observations of ULIRGs in order to determine the dominant emission mechanism at FIR wavelengths. They find that $`75`$ per cent are powered by star-formation, while the remaining $`25`$ per cent are AGN (see also Rigopoulou, Lawrence & Rowan-Robinson 1996). A massive dusty torus will also contain huge amounts of molecular gas, and therefore by definition will also be a prime site for star-formation. It has become increasingly evident that both AGN and starburst activity are often present in the same object, and it is therefore naïve to assume that the two processes are either independent or mutually exclusive. The first spectroscopic follow-ups of the faint sub-mm sources tend to show a mixture of dusty starbursts and AGN. However, in obscured sources it is frequently difficult to disentangle starburst from AGN components spectroscopically. Thus, in a SCUBA survey of four distant clusters of galaxies (Smail, Ivison & Blain 1997), the brightest source, SMM 02399-0136, was found to be a hyperluminous active galaxy at redshift $`z=2.8`$ \[Ivison et al.\]. \[Frayer et al.\] (1998) compare the measured $`L_{\mathrm{FIR}}/L^{\prime }_{\mathrm{CO}}`$ ratio of this high redshift galaxy, SMM 02399-0136 \[Ivison et al.\], to that of the local starburst Arp 220, and find that it is twice as high.
Since the FIR emission from Arp 220 is purely from merger-induced star-formation, they infer that approximately half of the FIR emission from SMM 02399-0136 must be due to a dust-enshrouded AGN. Further examples include the ultraluminous BALQSO APM 08279+5255, which at $`z\approx 3.9`$ is apparently the most luminous object currently known \[Lewis et al.\], and the sub-mm source B 1933+503, thought to be a high redshift ($`z>2`$) dusty radio quasar \[Chapman et al.\]. Such surveys suggest that starburst galaxies dominate over AGN at faint sub-mm fluxes, but the area of sky and the number of sources detected are still quite small. Of course, large populations of such dust-enshrouded QSOs and AGN have been hypothesized in order to explain the observed shape of the X-ray background (XRB) spectrum (Madau, Ghisellini & Fabian 1994; \[Comastri et al.\]; \[Gunn & Shanks\]). In addition, the hard X-ray source counts are considerably higher than those at soft X-ray energies \[Georgantopoulos et al.\], implying the presence of an additional hard-spectrum population beyond the broad-line QSOs known to make up the bulk of the sources at lower energies. However, one of the central features of obscured QSO models is the presence of significant quantities of gas and dust surrounding the active nucleus, providing the means of absorbing a large fraction of the intrinsic radiation and affecting the observed properties in the optical and X-ray regimes. But this energy must escape somewhere, and through heating of the dust, the absorbed flux is re-radiated at far infra-red (FIR) wavelengths, providing an important test of the models. The amounts of dust invoked are considerable; combined with the huge quantity of energy that must be radiated and the wide redshift distribution of these sources, this implies a substantial contribution to the FIR/sub-mm background radiation and source counts. This motivates our investigation here into the impact, at much longer wavelengths, of obscured QSO models used initially to explain observations at X-ray energies. While this paper was in preparation, Almaini, Lawrence & Boyle (1999) published sub-mm count predictions for obscured QSOs which give similar results to those described here. However, the conclusions of these authors are based on a simpler model in which both the fit of the obscured QSOs to the XRB and the obscured QSO sub-mm spectrum have to be assumed. We believe that the sub-mm predictions presented here are more robust, since they are based on models which have been shown to give good fits to the X-ray background over its full energy range. In addition, because in our method the sub-mm spectra are produced directly by absorbing the X-ray and optical spectra, their self-consistency is guaranteed. In Section 2, we describe the latest measurements of the spectrum and intensity of the far infra-red background, and recent observations of the sub-mm source counts. Section 3 discusses how our obscured AGN model can be extended to these longer wavelengths, and describes how the emission from the obscuring dust torus has been modelled. The results are presented in Section 4, and are compared with the observed sub-mm number counts and the intensity of the far infra-red background at these wavelengths. We discuss the implications of our results in Section 5, and make predictions for the number-redshift relation for future $`850\mu \mathrm{m}`$ surveys in Section 6. Some of these results have already been discussed by \[Gunn\] (1999).
## 2 Far Infra-red and Sub-mm Observations It is extremely difficult to measure the extragalactic far infra-red background (FIRB), due to the presence of foreground components from interplanetary zodiacal dust emission (peaking at $`20\mu \mathrm{m}`$) and interstellar dust emission from our Galaxy (peaking at $`150\mu \mathrm{m}`$). This zodiacal and galactic contamination must be carefully modelled, and subtracted to leave the extragalactic background. In addition, the cosmic microwave background (CMB) also contributes at these wavelengths, peaking at $`1\mathrm{mm}`$, and this must also be accounted for. Once these components have been accurately modelled and removed, it should then be possible to detect any extragalactic FIRB. The first detection of the FIRB was claimed by \[Puget et al.\] (1996), who used data from FIRAS on board COBE, taking advantage of the FIR window from $`200`$–$`800\mu \mathrm{m}`$, between the peaks of the zodiacal emission and the CMB, and found that the intensity of the extragalactic background has the form: $$\nu I_\nu \simeq 3.4\left(\frac{\lambda }{400\mu \mathrm{m}}\right)^{-3}\times 10^{-9}\mathrm{W}\mathrm{m}^{-2}\mathrm{sr}^{-1},$$ in the range $`400\mu \mathrm{m}<\lambda <1000\mu \mathrm{m}`$. More recent measurements have been made at shorter wavelengths using DIRBE, also on board COBE (Schlegel, Finkbeiner & Davis 1998; \[Hauser et al.\]; \[Fixsen et al.\]). At longer wavelengths, there are several surveys currently in progress with the aim of resolving the source populations contributing to the sub-mm background. These surveys take advantage of the increased sensitivity and resolution now available with the SCUBA camera on the JCMT. SCUBA sub-mm surveys can be divided into two types, the first being pointed observations of blank fields (\[Hughes et al.\] 1998; \[Barger et al.\]; \[Eales et al.\]). The second strategy is to make pointed observations of clusters of galaxies, in order to take advantage of the gravitational amplification due to the lensing mass of the cluster (Smail, Ivison & Blain 1997). In this way, sources can be detected that would be fainter than the flux limit possible without the amplification factor, and in addition, due to the magnification of the source plane, there are fewer problems with source confusion. The $`850\mu \mathrm{m}`$ number counts currently published can be fitted with a power-law of $`N(>S)=7.9\times 10^3S^{-1.1}`$, where $`N`$ is the number of sources per square degree detected above a flux limit of $`S\mathrm{mJy}`$ \[Smail et al.\]. These recent measurements of both the spectrum of the far infra-red background and the sub-mm number counts now provide strict constraints with which to test theories of galaxy evolution, star-formation history, etc. In the SCUBA Lens Survey \[Smail et al.\] sample of sub-mm sources, at least 20 per cent were found to show evidence for AGN activity \[Barger et al.\]. In the next Section, we investigate the implications of the obscured AGN hypothesis by modelling their properties at sub-mm wavelengths. In Section 4, we predict the number of such sources expected from our models compared with the data described above, and put limits on their contribution to the far infra-red background. ## 3 Modelling Here we aim to use the obscured AGN models developed in \[Gunn & Shanks\] (1999) in order to make self-consistent predictions at sub-mm wavelengths. We assume that the luminosity absorbed at high energies is then reprocessed and emitted at lower energies, with a known thermal spectrum.
The fluxes expected from such objects can then be estimated, from which we can predict sub-mm source counts and their contribution to the far infra-red background. The assumptions we have made are the following: * The intrinsic $`0.3`$–$`3.5\mathrm{keV}`$ X-ray luminosity of each source is known, and is assumed to come from the zero-redshift X-ray luminosity function, with parameters as described in Table 1. * The column density, $`N_H`$, perceived by the optical, ultra-violet, and X-radiation is assumed to be a measure of the intrinsic amount of dust present, and is not affected significantly by the viewing angle. The covering factor, $`f_{cov}`$, of this obscuring material is defined to be the fraction of lines of sight for which a constant $`N_H`$ is seen, and the remaining lines of sight are unobscured. * All the absorbed flux goes into heating up the dust and gas in the obscuring medium, whatever the assumed geometry, and is then re-radiated isotropically in the thermal infra-red. ### 3.1 Column density distribution Several different column density distributions have been proposed in order to explain the observed spectrum of the X-ray background. For example, Madau et al. (1994) used a bimodal model, consisting of a population of unabsorbed sources plus 2.5 times as many absorbed sources whose column densities had a Gaussian distribution with mean $`\mathrm{log}(N_H/\mathrm{cm}^{-2})=24.0\pm 0.8`$. \[Comastri et al.\] (1995) used four sub-classes of absorbed populations, each with a different normalization with respect to the unabsorbed population. Here we use two models, the first being a flat distribution of columns, which was found in \[Gunn & Shanks\] (1999) to give good agreement with the observed X-ray source counts and XRB spectrum for a $`q_0=0.0`$ universe. The second model has a tilted distribution, with a greater proportion of high columns than in the flat model, which gives better agreement with the data in a $`q_0=0.5`$ universe. The differences in the observed properties of each population of sources are attributable solely to the column density. The flat distribution uses seven populations evenly spaced in log space: $`\mathrm{log}(N_H/\mathrm{cm}^{-2})=19.5`$, $`20.5,21.5,22.5`$, $`23.5,24.5,25.5`$, each containing the same number and intrinsic luminosity function of AGN. The tilted distribution uses six populations, with the relative normalizations obeying the relation: $$\mathrm{\Phi }(N_H)=\left\{1+0.5\mathrm{log}\left(\frac{N_H}{10^{20}}\right)\right\}\mathrm{\Phi }^{}.$$ The two distributions are compared in Fig. 1. ### 3.2 X-ray luminosity function and evolution Since we are assuming that the observed sub-mm emission results from reprocessed nuclear X-ray and optical radiation, we take as a starting point a two power-law zero-redshift X-ray luminosity function (XLF): $$\mathrm{\Phi }_X(L_X)=\{\begin{array}{cc}\mathrm{\Phi }_X^{}L_{X_{44}}^{-\gamma _1}\hfill & L_X<L_X^{}(z=0)\hfill \\ \mathrm{\Phi }_X^{}L_{X_{44}}^{-\gamma _2}(L_{X_{44}}^{})^{(\gamma _2-\gamma _1)}\hfill & L_X>L_X^{}(z=0),\hfill \end{array}$$ where $`\mathrm{\Phi }_X^{}`$ is the normalization of the XLF, and $`L_{X_{44}}`$ is the $`0.3`$–$`3.5\mathrm{keV}`$ X-ray luminosity in units of $`10^{44}\mathrm{erg}\mathrm{s}^{-1}`$.
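As a concrete, schematic rendering of this double power-law form, the fragment below evaluates $`\mathrm{\Phi }_X`$ at $`z=0`$. The slope, break, and normalization values are placeholders standing in for the Table 1 entries, which are not reproduced here; the evolution of the break with redshift is added in the text that follows.

```python
# Schematic evaluation of the zero-redshift two power-law XLF above.
# All numbers are placeholders for the Table 1 entries (not reproduced
# in the text), chosen only to make the sketch runnable.
GAMMA1, GAMMA2 = 1.7, 3.4     # faint- and bright-end slopes (placeholders)
PHI_STAR, LSTAR0 = 1.0, 0.5   # normalization and break L*(z=0) (placeholders)

def phi_x(L44):
    """XLF at 0.3-3.5 keV luminosity L44 = L_X / 1e44 erg/s."""
    if L44 < LSTAR0:
        return PHI_STAR * L44**(-GAMMA1)
    return PHI_STAR * L44**(-GAMMA2) * LSTAR0**(GAMMA2 - GAMMA1)

# the break factor makes the function continuous at L* :
print(phi_x(LSTAR0 * 0.999), phi_x(LSTAR0 * 1.001))
```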
Here the luminosity evolution is parametrized as either polynomial evolution: $$L_X^{}(z)=L_X^{}(0)\mathrm{\hspace{0.17em}10}^{(\gamma _zz+\gamma _z^{}z^2)},$$ or power-law evolution: $$L_X^{}(z)=L_X^{}(0)(1+z)^{\gamma _z},$$ where a maximum redshift, $`z_{cut}`$, at which the evolution stops is incorporated, such that: $$L_X^{}(z)=L_X^{}(z_{cut})\mathrm{for}z>z_{cut}.$$ The parameters for the XLF and its evolution have been taken from the fits to ROSAT and Einstein EMSS QSO number counts by Boyle et al. (1994), and are listed in Table 1. The broad line QSOs used to define the XLF correspond to our populations with $`\mathrm{log}(N_H/\mathrm{cm}^{-2})=19.5`$ and $`20.5`$, so we normalize the populations accordingly. ### 3.3 Canonical X-ray/optical QSO spectrum The intrinsic QSO spectrum, $`F(E)`$, is assumed here to consist of two power-laws, with spectral indices $`\alpha _x=0.9`$ \[Nandra & Pounds\] and $`\alpha _{opt}=0.8`$ \[Francis\] at X-ray and optical energies respectively. In addition, the X-ray power-law is modified by the effects of reflection. The relative normalization is defined by a power-law of slope $`\alpha _{ox}=1.5`$ joining $`2\mathrm{keV}`$ and $`2500\mathrm{\AA }`$ (\[Tananbaum et al.\]; \[Yuan et al.\] 1998). The behaviour of the spectrum between these two regimes is not well known, however. \[Zheng et al.\] (1997) investigate the far ultra-violet properties of a sample of high redshift QSOs and find that the radio-quiet QSO spectrum can be approximated by a broken power-law, where the spectral break occurs at $`1050\mathrm{\AA }`$ ($`0.01\mathrm{keV}`$). The spectral index at longer wavelengths is $`\alpha _{opt}\simeq 0.86\pm 0.01`$ (c.f. the $`\alpha _{opt}=0.8`$ used here), steepening significantly to shorter wavelengths (see also \[Laor et al.\]). For simplicity, we therefore assume that the spectral break can be approximated by a step discontinuity at $`0.01\mathrm{keV}`$. ### 3.4 Absorbed X-ray/optical QSO spectrum The next step is to estimate the absorbed luminosity, $`L_{\mathrm{abs}}`$, responsible for the heating of the torus, for each column density used. At X-ray energies, the opacity is dominated by photo-electric absorption at columns of $`N_H<10^{24}\mathrm{cm}^{-2}`$. Above $`N_H\sim 10^{24}\mathrm{cm}^{-2}`$, the obscuring medium becomes Compton thick due to electron scattering, such that the effective optical depth is $`\tau _{\mathrm{eff}}=\tau _{\mathrm{ph}}+\tau _{\mathrm{es}}`$. However, since the absorbed fraction of the luminosity is already almost unity above $`N_H\sim 10^{24}\mathrm{cm}^{-2}`$ due to photo-electric absorption alone (see Fig. 2), we do not take electron scattering into account here, as it does not affect our results significantly. The photo-electric absorption coefficients are taken from \[Morrison & McCammon\] and can be evaluated using xspec \[Arnaud\]. In the optical, the dust extinction laws from \[Howarth\] (1983) and \[Seaton\] (1979) are used. A constant gas to dust ratio is implicitly assumed here. As the column increases, so does the amount of the continuum which is destroyed by these processes. The absorbed luminosity can be approximated by calculating the energy at which the optical depth is unity for each process, $`E(\tau _x=1)`$ and $`E(\tau _{opt}=1)`$ respectively, as a function of the column density, and then by assuming that all the luminosity emitted between these two energies is absorbed. Using the relationship $`\tau (E)=\sigma (E)N_H`$, the cross-section scales as $`\sigma (E)=1/N_H`$ for an optical depth of unity.
The absorbed luminosity is then calculated from: $$L_{\mathrm{abs}}=f_{cov}\int _{E(\tau _{opt}=1)}^{E(\tau _x=1)}F(E)\text{d}E.$$ The range of intrinsic luminosities is defined by the X-ray luminosity function (XLF), so the absorbed luminosity is normalized by the known X-ray luminosity, $`L_X`$: $$L_X=\int _{0.3\mathrm{keV}}^{3.5\mathrm{keV}}F(E)\text{d}E.$$ The fraction of the luminosity which is absorbed can then be calculated. Fig. 2 shows how the absorbed fraction increases with column density, saturating above $`N_H\sim 10^{25}\mathrm{cm}^{-2}`$, where no radiation whatsoever can escape unaffected from the nucleus, and all the emitted energy goes into heating up the obscuring medium. ### 3.5 Far infra-red QSO spectrum Finally, we assume that all the absorbed radiation, $`L_{\mathrm{abs}}`$, has to escape as thermal emission from the dust in the far infra-red, and therefore that $`L_{\mathrm{FIR}}=L_{\mathrm{abs}}`$. We also assume that the FIR luminosity is isotropic, and therefore that the received flux is independent of viewing angle. For the case in which the obscuring medium is itself isotropic, this is a realistic assumption, as self-shielding of the inner regions means that only radiation from the outer, cooler layers of dust will be received. However, for a toroidal geometry (e.g., $`f_{cov}=0.5`$), radiation from the hot dust in the innermost regions will be able to escape from the top of the torus, and for face-on viewing angles the spectrum will be broadened due to the superposition of components at a range of different dust temperatures \[Pier & Krolik\]. This has the effect of boosting the flux at short wavelengths ($`<10\mu \mathrm{m}`$) but does not change the spectrum significantly at the wavelengths with which we are primarily concerned ($`>100\mu \mathrm{m}`$), as the hot emission component makes little contribution there, and the torus is optically thick. As the true geometry is still unclear, neglecting this effect means that the predictions from our models at short wavelengths should be taken to be lower limits. We therefore choose to approximate our obscuring medium by isothermal dust, emitting isotropically. By convention, at these energies, frequency units are used, and we write: $$L_{\mathrm{FIR}}=\int P(\nu _e)\text{d}\nu _e.$$ Assuming optically thin dust emission, the Planck function, $`B(\nu ,T)`$, is modified by an opacity law, where the opacity depends on both the dust grain composition and the size and shape distribution of the grains, and which can be parametrized as $`\kappa _d\propto \nu ^\beta `$. Following \[Cimatti et al.\] (1997), we use the opacity law: $$\kappa _d=0.15\left(\frac{\nu _e}{250\mathrm{GHz}}\right)^2\mathrm{cm}^2\mathrm{g}^{-1}.$$ The dust temperature is taken to be in the range $`30\mathrm{K}<T_d<70\mathrm{K}`$ (\[Haas et al.\]; \[Benford et al.\], 1999). Since the emitted FIR luminosity is constrained by the X-ray luminosity, increasing the assumed temperature means that the total normalization must be reduced in order to keep $`L_{\mathrm{FIR}}`$ constant, and vice versa.
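These ingredients — the opacity law, an isothermal Planck function, and the $`L_{\mathrm{FIR}}`$ normalization — translate directly into a numerical sketch. The following is our own illustration, anticipating the flux relation (eq. (2)) derived in the next paragraph; the constants are cgs, and the example luminosity, temperature, redshift, and luminosity distance are arbitrary (in practice $`D_L`$ follows from the adopted cosmology). It fixes the dust mass from $`L_{\mathrm{FIR}}`$ and then evaluates the observed $`850\mu \mathrm{m}`$ flux density:

```python
import numpy as np

h, k, c_light = 6.626e-27, 1.381e-16, 2.998e10   # cgs units
Mpc = 3.086e24                                    # cm

def kappa_d(nu):
    """Opacity law of Cimatti et al. (1997), cm^2 g^-1."""
    return 0.15 * (nu / 250e9)**2

def planck(nu, T):
    """B(nu, T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2*h*nu**3 / c_light**2 / np.expm1(h*nu / (k*T))

def dust_mass(L_fir, T):
    """M_d in grams from L_FIR = int 4 pi kappa_d B dnu (isothermal dust)."""
    nu = np.logspace(10.0, 14.0, 4000)
    f = 4*np.pi * kappa_d(nu) * planck(nu, T)
    integral = (0.5 * (f[1:] + f[:-1]) * np.diff(nu)).sum()  # trapezoid rule
    return L_fir / integral

def flux_850um_mjy(L_fir, T, z, D_L_mpc):
    """Observed 850 um flux density in mJy, following eq. (2) below."""
    nu_e = (1.0 + z) * c_light / 850e-4           # emitted frequency, Hz
    S = (1+z) * kappa_d(nu_e) * planck(nu_e, T) * dust_mass(L_fir, T) \
        / (D_L_mpc * Mpc)**2                      # erg s^-1 cm^-2 Hz^-1
    return S / 1e-26                              # 1 mJy = 1e-26 cgs

# illustrative only: L_FIR = 1e46 erg/s, T_d = 30 K, z = 2, D_L = 15 Gpc
print(flux_850um_mjy(1e46, 30.0, 2.0, 15000.0))   # of order 10 mJy
```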
The emitted power, $`P(\nu _e)`$, can therefore be parametrized as follows: $$P(\nu _e)=4\pi \kappa _d(\nu _e)B(\nu _e,T_d)M_d,$$ (1) where $`M_d`$ is the dust mass. To calculate the received flux from such a source, we use the relationship: $$S(\nu _o)\text{d}\nu _o=\frac{P(\nu _e)}{4\pi D_L^2}\text{d}\nu _e=(1+z)\frac{P(\nu _e)}{4\pi D_L^2}\text{d}\nu _o,$$ where $`D_L`$ is the luminosity distance and $`\text{d}\nu _e=(1+z)\text{d}\nu _o`$, to give: $$S(\nu _o)=\frac{(1+z)\kappa _d(\nu _e)B(\nu _e,T_d)M_d}{D_L^2}.$$ (2) To calculate the integrated source counts as a function of flux density, we then use the same method as in \[Gunn & Shanks\] (1999), starting from the $`0.3`$–$`3.5\mathrm{keV}`$ XLF and using the above relationships between $`L_X`$ and $`S(\nu _o)`$. Note that in the sub-mm regime flux density is traditionally used, with units of Janskys ($`1\mathrm{Jy}=10^{-23}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{Hz}^{-1}`$), rather than the broad-band flux used at X-ray energies (units: $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$) or optical magnitudes. ## 4 Sub-mm predictions Here we investigate the contribution to the sub-mm source counts and the far infra-red background predicted by our obscured AGN model. We look at the effects of changing certain parameters, such as the covering factor and temperature of the obscuring medium, the luminosity evolution of the AGN, and the maximum redshift at which these sources exist. At present, these parameters are not well constrained, particularly at high redshift. However, despite evidence that the space density of QSOs declines beyond $`z\sim 3`$ \[Shaver et al.\], we know that QSOs exist at high redshift. New search techniques are discovering more such objects all the time, for instance the three QSOs found recently by the SDSS Collaboration (Fan et al. 1998), all with redshifts in the range $`4.75<z<5.0`$. As we are using luminosity functions and evolutionary models determined from X-ray selected QSOs, which have diverging properties above $`z\sim 2`$, these can be taken to span the range of likely properties. By invoking the most extreme cases, we are able to put firm upper limits on the obscured AGN contribution to both the source counts and the FIRB intensity. ### 4.1 The covering factor of the absorbing material In order to remain consistent with the assumed flat column distribution used in \[Gunn & Shanks\] (1999), we must also consider the covering factor of the obscuring material, as this affects the intrinsic distribution of column densities. Here, we take two examples: first, that the obscuring material is isotropic, i.e., with a covering factor $`f_{cov}=1`$; and secondly, that a torus covers half the sky as perceived by the nucleus for all sources, i.e., $`f_{cov}=0.5`$. We assume that these two values will span the true range of covering factors. Two intrinsic distributions giving rise to a perceived flat column distribution, as used in the $`q_0=0.5`$ model, are shown in Fig. 3. A method for differentiating between these two proposed geometries for the obscuring material in AGN is to determine the fraction which are highly luminous at sub-mm wavelengths. By definition, very little dust exists along the line of sight to broad-line QSOs. Therefore, if the absorbing medium is isotropic, the dust content of broad-line QSOs must be intrinsically low, with correspondingly low sub-mm emission. However, for a toroidal structure in the spirit of the Unified Model \[Antonucci\], broad-line QSOs could contain large amounts of dust in a plane perpendicular to the line of sight.
This is consistent with the large infra-red bump observed in the $`3000\mathrm{\AA }`$ to $`300\mu \mathrm{m}`$ spectra of a sample of Palomar Green QSOs \[Sanders et al.\]. Therefore, if all QSOs contain large quantities of dust, they will be strong sub-mm sources. Proposed sub-mm observations of X-ray selected QSOs using SCUBA/JCMT will shed further light on this question in the near future. ### 4.2 The effects of changing the XLF parameters We first investigate the predicted source counts for the obscured QSO model with a flat distribution of columns, integrated over redshifts $`0<z<5`$. The X-ray luminosity function used is evolved according to the power-law and polynomial models from \[Boyle et al.\] (1994), for both $`q_0=0.0`$ and $`q_0=0.5`$ cosmologies. The parameters of these models are detailed in Table 1. First, we take as our fiducial model a power-law prescription for the luminosity evolution and a $`q_0=0.0`$ cosmology, which we denote as POW(0.0). This model will be used hereafter, unless stated otherwise. In Fig. 4, we show the source counts predicted by this model, compared with the SCUBA data described in Section 2. A covering factor of $`f_{cov}=1`$ is assumed, for a dust temperature of $`T_d=30\mathrm{K}`$, integrated over $`0<z<5`$. The contribution from each population of obscured QSOs is shown by a dashed line, with the total source counts denoted by a solid line. The reverse trend to that observed at X-ray energies is seen here: the populations with the smallest column densities give the lowest sub-mm fluxes, with the high column density populations dominating the sub-mm source counts, due to the large dust masses present. This model predicts $`20`$ per cent of the source counts at $`2\mathrm{mJy}`$, and flattens off at fainter fluxes, but will provide $`17`$ per cent of the FIRB at $`850\mu \mathrm{m}`$. In Fig. 5, the predictions for all four Boyle et al. models are shown, for covering factors of (a) $`f_{cov}=1`$ and (b) $`f_{cov}=0.5`$, using the intrinsic column distribution that each covering factor implies. As expected, the low $`q_0`$ models give a much larger contribution to the source counts than the $`q_0=0.5`$ models, due to volume effects. Taking the measured value for the $`850\mu \mathrm{m}`$ FIRB intensity of $`I_{\mathrm{FIRB}}=5.0\times 10^{-10}\mathrm{W}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$ from \[Fixsen et al.\] (1998), we calculate the fraction of the FIRB which can be accounted for by our obscured QSOs. Low $`q_0`$ models provide $`8`$–$`17`$ per cent of the FIRB intensity, compared with $`3`$–$`8`$ per cent for the $`q_0=0.5`$ models, as shown in Table 2. The difference in the predicted number counts and background intensity between the $`f_{cov}=1`$ and $`f_{cov}=0.5`$ cases is due to the increased intrinsic far infra-red luminosity of sources which have an isotropic absorbing medium, as a greater fraction of the nuclear radiation is intercepted. These intrinsically brighter sources are therefore detected at higher fluxes, increasing the bright-end number counts. However, at faint fluxes the predictions are similar for both assumed covering factors, implying that we are observing high redshift objects, which by definition will be the most luminous sources in either case. For an isotropic covering medium, the predicted source counts of obscured AGN at $`2\mathrm{mJy}`$ account for between 8 and 20 per cent of the total counts, whereas for the anisotropic case the numbers drop by about a third, since the objects are less luminous.
The contribution to the intensity of the FIRB is $`3`$–$`13`$ per cent in the $`f_{cov}=0.5`$ case, which is about $`25`$ per cent lower than the $`4`$–$`17`$ per cent predicted for $`f_{cov}=1`$. ### 4.3 The effects of changing $`z_{max}`$ and $`T_d`$ If we were to assume that obscured QSOs exist out to redshift $`z_{max}=10`$, then we could make a much larger contribution to the source counts and FIRB intensity, as shown for $`T_d=30\mathrm{K}`$ and $`f_{cov}=1`$ in Fig. 6(a). However, it is difficult to envisage a scenario in which large numbers of dusty QSOs form at such early epochs \[Efstathiou & Rees\], and therefore this places a useful constraint on the maximum contribution such models can provide. Obscured QSOs from $`0<z<10`$ could contribute over $`30`$ per cent of the \[Fixsen et al.\] (1998) measurement of $`I_{\mathrm{FIRB}}`$. There are very few complete sub-mm surveys of AGN at low and high redshifts in the literature with enough data to constrain the dust temperatures in such objects. The uncertainties involved in making such estimates are high, since assumptions have to be made about the cosmology, the dust masses and the opacities; in addition, observations at several different wavelengths are required in order to start to put limits on the parameters in the models. Treating the dust as isothermal is likely to be an oversimplification, but will be adequate for our purposes, and we therefore estimate that the range $`30\mathrm{K}<T_d<70\mathrm{K}`$ should span the most probable temperatures. For each column density used, the FIR luminosity is in effect fixed by the known amount of nuclear X-ray and optical luminosity which has been absorbed. If we increase the assumed dust temperature, the mass of the dust must be decreased in order to keep $`L_{\mathrm{FIR}}`$ constant. As the temperature is increased, the peak of the thermal radiation moves to higher frequencies, thereby reducing the intensity of the source at sub-mm energies. In Fig. 6(b), we show the effect of varying the temperature on the sub-mm counts, where the highest number of sub-mm sources is observed for the model with the lowest dust temperature. If the dust around most AGN is warm, $`T_d\sim 70\mathrm{K}`$, then the contribution to the $`850\mu \mathrm{m}`$ FIRB will be negligible, whereas at $`100\mu \mathrm{m}`$ their impact will be more significant. ### 4.4 Tilted column distributions for $`q_0=0.5`$ models One further test here for the obscured QSO model is to look at the tilted distribution of column densities, which was invoked in order to obtain a better fit to the XRB for high density, $`q_0=0.5`$, models for the XLF. By skewing the distribution of objects towards higher column densities, this could have the effect of overpredicting the source counts and background intensity. However, in Fig. 7 it can be seen that the source counts predicted with this distribution for a range of models lie well below the observed counts, and flatten off towards fainter fluxes, again primarily due to volume effects. The maximum contribution to $`I_{\mathrm{FIRB}}`$ from these models is $`15`$ per cent, for model POW(0.5)t with $`T_d=30\mathrm{K}`$ and $`f_{cov}=1`$. ### 4.5 Number-redshift distributions In Fig. 8, we plot the number-redshift distributions predicted by our obscured QSO models, for an $`850\mu \mathrm{m}`$ survey to $`2\mathrm{mJy}`$ (c.f. the $`1\sigma `$ confusion limit for JCMT/SCUBA of $`0.44\mathrm{mJy}\mathrm{beam}^{-1}`$; Blain, Ivison & Smail 1998).
Again, we use a dust temperature of $`T_d=30\mathrm{K}`$, consider sources out to redshift $`z_{max}=5`$, and make predictions for both (a) $`f_{cov}=1`$ and (b) $`f_{cov}=0.5`$. We have taken four models with a flat column distribution of seven populations of obscured AGN, and two models using a tilted distribution of six populations, as described in Section 3.1 (all for $`f_{cov}=0.5`$). In the $`q_0=0.0`$ cosmology, the number of sub-mm sources predicted at redshifts $`z>3`$ is much higher than for $`q_0=0.5`$, consistent with the steeper faint-end slope of the LogN:LogS relation for a low $`q_0`$ universe. Once more source identifications become available, it will be very straightforward to determine the form of the luminosity evolution: since it is as easy to detect a sub-mm source at high redshift as at low redshift, large numbers of high redshift obscured QSOs, if they exist, will be found in deep SCUBA surveys. The follow-up identification programs for the latest SCUBA sub-mm surveys are nearing completion, and the source catalogues will soon be published. We shall therefore be able to compare the numbers of AGN detected with the redshift distributions predicted by our models. It is interesting to note that $`\sim 10`$ per cent of the optical counterparts to sub-mm sources are classified as Extremely Red Objects (EROs; \[Hu & Ridgway\]), which are found in $`K`$-band images of the SCUBA error boxes \[Smail et al.\]. Deep radio maps provide more accurate positional information than the SCUBA maps, assuming that the radio and sub-mm emission is due to the same mechanism, and the ERO counterparts are confirmed by the radio data. It will be very difficult to obtain spectroscopic redshift information about these objects, even with a 10-m class telescope, as they are so faint in the optical, with $`I\sim 25`$. However, near infra-red spectroscopy has been used successfully to obtain a redshift of $`z=1.44`$ for the ERO HR10 \[Dey et al.\]. Potentially, these could be examples of very highly obscured AGN at high redshift, for which in the optical and near infra-red we see the dusty host galaxy (hence the very red colours), and only see evidence for the AGN through the sub-mm and radio emission. Future deep surveys with AXAF and XMM, such as the proposed AXAF and SCUBA observations of the SSA 13 field (Cowie et al.), will provide a vital test of these theories, in searching for faint X-ray emission associated with these sources. However, unlike in the sub-mm regime, the X-ray $`k`$-correction is not working in our favour, and if we assume that these sources are at high redshift, then this is still an extremely ambitious project.

### 4.6 Predicted spectrum of the far infra-red/sub-mm background

We have shown from the sub-mm number counts predicted by our models that obscured AGN provide a small but non-negligible fraction of the intensity of the far infra-red background at $`850\mu \mathrm{m}`$, ranging from 1 to 33 per cent. Fig. 9 shows the spectrum predicted from our flat distribution of column densities using a covering factor $`f_{cov}=1`$, a dust temperature $`T_d=30\mathrm{K}`$, and integrated over redshifts $`0<z<5`$. It can be seen that all four models predict the same intensity at high frequencies. This is to be expected, as the emission here is dominated by low redshift objects, for which the rest-frame peak of the $`30\mathrm{K}`$ thermal spectrum is at $`\sim 100\mu \mathrm{m}`$.
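The strong negative sub-mm K-correction underlying these statements is easy to reproduce with a toy greybody. In the sketch below, the emissivity index $`\beta =1.5`$, the $`q_0=0`$ luminosity distance and the normalisation are illustrative assumptions; the point is only the qualitative behaviour, namely that the steep Rayleigh-Jeans slope offsets most of the distance dimming at $`850\mu \mathrm{m}`$.

```python
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 3.0e8   # SI

def greybody(nu, T, beta=1.5):
    """Optically thin modified blackbody, nu^beta * B_nu(T), arbitrary units.
    beta = 1.5 is an assumed emissivity index."""
    return nu ** (beta + 3.0) / np.expm1(h * nu / (k * T))

def rel_flux_850(z, T=30.0):
    """Observed 850um flux of identical dusty sources versus z, normalised
    below to z = 1; D_L ~ z(1 + z/2) is the q0 = 0 toy distance."""
    nu_rest = (1.0 + z) * c / 850.0e-6
    return greybody(nu_rest, T) * (1.0 + z) / (z * (1.0 + 0.5 * z)) ** 2

ref = rel_flux_850(1.0)
for z in (1, 2, 4, 8):
    no_k = (1.5 / (z * (1.0 + 0.5 * z))) ** 2       # pure distance dimming
    print(f"z={z}: S/S(z=1) = {rel_flux_850(z)/ref:6.3f}   (no K-corr: {no_k:.4f})")
# The rest-frame greybody climbs as (1+z)^(3+beta) below its peak, so the
# source fades far more slowly with z than the inverse-square factor alone.
```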
Since the model parameters have been obtained from fits of the X-ray luminosity function to X-ray selected QSOs from ROSAT and the Einstein EMSS, which have relatively low median redshifts of $`z\sim 1.5`$ and $`z\sim 0.2`$ respectively, we would expect the predictions to diverge at high redshift and therefore low frequencies. In Fig. 10, we show the effects of changing the maximum redshift and the dust temperature used in the models. In panel (a), it can be seen that the low redshift sources contribute at higher frequencies around $`100\mu \mathrm{m}`$, whereas the high redshift sources account for the largest fraction of the FIRB intensity at $`850\mu \mathrm{m}`$. In panel (b), we show how, as the dust temperature is increased, the peak of the predicted FIRB background moves to higher energies. The integrated luminosity is fixed by the amount of absorbed nuclear X-ray and optical radiation, and is independent of temperature. Since this is a spectral energy distribution diagram, in which equal areas mean equal energies, the predicted intensities for each model are identical after a shift along the frequency axis. A dust temperature of $`T_d\sim 40\mathrm{K}`$ would give the maximum contribution to the peak of the observed FIRB.

## 5 Discussion

The first question to address is what these obscured AGN will look like at wavelengths other than the sub-mm. Compared with the majority of the X-ray source population, sub-mm sources have much higher dust masses and therefore much higher column densities. Hence in general, the optical and X-ray nuclear emission will be totally obscured, and the counterparts are likely to appear like relatively “normal” galaxies. These galaxies may perhaps have narrow emission lines in their optical spectra, possibly of high ionization species, or may look very dusty from their optical and infra-red colours. However, if the obscuring material is not assumed to be isotropic, then a fraction of these highly obscured sources will be orientated such that our line of sight lies within the opening angle of the torus, and therefore the X-ray and optical nuclear flux will escape unattenuated, while simultaneously a large sub-mm flux is detected from the dust in the torus. The identification of optical counterparts to sub-mm sources has similar problems to those encountered with X-ray data, in that the point-spread function of the telescope is large (Half Power Beam Width of $`14.7`$ arcsec at $`850\mu \mathrm{m}`$ for SCUBA/JCMT), and therefore a number of plausible counterparts can lie in the error box. As sub-mm sources often have associated radio emission, deep radio maps of sub-mm survey fields have been taken (\[Ivison et al.\] in preparation; \[Richards\] 1998). Star-formation regions are expected to contain many supernova remnants, known to be strong radio emitters, and even radio-quiet AGN are likely to be detected as very faint, $`\mu \mathrm{Jy}`$ sources in extremely deep radio maps. The high angular resolution of the radio data, combined with the fact that radio emission is not affected by the presence of dust, means that optical counterparts to the sub-mm sources can be found in a much less ambiguous manner. Once the counterpart has been found, the mechanism giving rise to the sub-mm emission must then be determined. By assuming that for starburst galaxies the star-formation rate controls both the radio emission and the thermal sub-mm emission, \[Carilli & Yun\] (1999) use the radio to sub-mm spectral index as a redshift indicator.
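The idea can be illustrated with a two-component toy SED. Everything below, the synchrotron slope of $`0.8`$, $`T_d=40\mathrm{K}`$, $`\beta =1.5`$ and the relative normalisation, is an assumption picked for illustration rather than the Carilli & Yun calibration; the sketch only shows why the observed radio to sub-mm spectral index rises monotonically with redshift and can therefore serve as a crude redshift indicator.

```python
import numpy as np

h, k = 6.626e-34, 1.381e-23

def sed(nu, T=40.0, beta=1.5, a_sync=0.3):
    """Toy rest-frame SED (arbitrary units): optically thin greybody plus
    synchrotron ~ nu^-0.8; T, beta and the normalisation a_sync are
    illustrative assumptions, tuned only to give alpha ~ 0.3 at z = 0."""
    dust = (nu / 3.5e11) ** (beta + 3.0) / np.expm1(h * nu / (k * T))
    sync = a_sync * (nu / 1.4e9) ** -0.8
    return dust + sync

def alpha_radio_submm(z):
    """Observed spectral index between 1.4 GHz and 350 GHz (~ 850 um)."""
    s_submm = sed(3.5e11 * (1.0 + z))
    s_radio = sed(1.4e9 * (1.0 + z))
    return np.log(s_submm / s_radio) / np.log(3.5e11 / 1.4e9)

for z in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"z = {z:3.1f}:  alpha = {alpha_radio_submm(z):.2f}")
# alpha rises monotonically with z: the observed 350 GHz point climbs the
# steep greybody while the radio point slides down the synchrotron slope.
```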
If, however, an independent measure of the source redshift can be obtained, then the radio to sub-mm spectral index can be used to infer whether there is any additional contribution to either component due to the presence of an AGN. A radio-loud AGN will have proportionally higher radio emission than a starburst galaxy, whereas an obscured radio-quiet AGN will have lower radio and higher sub-mm emission. At present, there is little information available with which to constrain the dust temperature, or range of temperatures, that should be adopted for our models. Here, we have assumed that $`T_d`$ remains constant with redshift, but as shown in Fig. 6, the temperature affects the predicted source counts significantly. The advent of larger samples of AGN at both low and high redshifts with multi-wavelength FIR/sub-mm data will enable any evolution of the dust temperature to be constrained. Having chosen to keep $`T_d`$ constant with redshift, and a universal opacity law, the only unconstrained parameter defining the luminosity is therefore the dust mass, $`M_d`$ (see Eqn 1). Therefore, since the FIR luminosity is directly related to the X-ray luminosity, which we have modelled to undergo pure luminosity evolution with redshift, one possible interpretation is that the dust mass also evolves with redshift, with the form $`M_d\propto (1+z)^3`$. If the dust mass does not scale with the luminosity, then in order for our models to hold, there must be a balance between dust mass and temperature, with the mass compensating for changes in temperature. A further consideration is the question of whether the assumption of a constant gas to dust ratio at all redshifts is appropriate. For a QSO, there exists a radius within which the radiation field is so intense that dust particles will not survive, the so-called dust sublimation radius. Granato, Danese & Franceschini (1997) proposed that a significant quantity of gas exists inside the dust sublimation radius, and that a large proportion of the photo-electric absorption occurs within this region. This would have the effect of reducing the dust masses calculated using the column densities inferred from the observed photo-electric absorption at X-ray energies. This would, in turn, lower the sub-mm fluxes from obscured AGN, cutting their contribution to the source counts and the intensity of the FIR background. However, the calculation of the gas to dust ratio for SMM 02399-0136 at redshift $`z=2.8`$ by \[Frayer et al.\] (1998), and SMM 14011+0252 at redshift $`z=2.6`$ by \[Frayer et al.\] (1999), from measurements of CO line emission combined with the sub-mm flux, gives a value similar to that found in nearby galaxies, from which they infer that CO emitting sources at high redshift have already undergone significant chemical evolution. We have therefore chosen to adopt a constant gas to dust ratio in our models, since at present the available observations are insufficient to constrain any suitable alternatives.

## 6 Future observational tests of the models

The next decade promises to bring enormous advances in this field, with the advent of innovative instrumentation combined with the light-gathering capacity of 10-m class telescopes, plus the new generation of satellite-borne detectors.
In the near infra-red, large area surveys will be possible with new wide-field cameras, such as the Cambridge Infra-Red Survey Instrument (CIRSI; \[Beckett et al.\]), allowing follow-up observations of the entire survey area for existing and future X-ray surveys, rather than the pointed observations of individual sources used previously. In the far infra-red, the FIRBACK survey \[Lagache\] is an ISOPHOT $`175\mu \mathrm{m}`$ survey of $`4\mathrm{deg}^2`$ of sky at high Galactic latitudes, with the aim of determining the source populations making up the far infra-red background. Proposed satellite-borne mid and far infra-red observatories include the NASA Space Infra-Red Telescope Facility (SIRTF; \[Werner\]), and the ESA Far Infra-Red and Sub-millimetre Telescope (FIRST; \[Pilbratt\]; \[Genzel\]), scheduled for launch in 2001 and 2007 respectively. In the sub-mm regime, a proposed wide area survey with SCUBA plans to cover a subset of the European Large Area ISO Survey region (ELAIS; \[Oliver et al.\]) of around 640 square arcminutes ($`0.178\mathrm{deg}^2`$). The survey will have a brighter flux limit than existing deep pencil-beam surveys, and aims for a $`3\sigma `$ detection threshold of $`8\mathrm{mJy}`$, from which $`\sim 40`$ sources are expected. Our predictions for the number-redshift distribution for such a survey are presented in Fig. 11(a) for $`f_{cov}=1`$, with the more conservative predictions using $`f_{cov}=0.5`$ in Fig. 11(b). The expected number of sources ranges between $`14\mathrm{deg}^{-2}`$ (POL(0.5), $`f_{cov}=0.5`$, $`z_{max}=5`$) and $`180\mathrm{deg}^{-2}`$ (POW(0.5)t, $`f_{cov}=1`$, $`z_{max}=5`$), depending on the evolution and column density distribution used in the model. Clearly, if the obscuring torus is also a site of active star-formation and the sub-mm flux of each component is comparable \[Frayer et al.\], then the number of AGN may be somewhat higher than predicted on the basis of the above model.

## 7 Conclusions

In this paper, we have extended the obscured QSO model for the X-ray background of \[Gunn & Shanks\] (1999) to the sub-millimetre regime, by considering the fate of the X-ray, ultra-violet and optical energy absorbed by the obscuring medium. This energy goes into heating up the dust in the obscuring material, which then radiates thermally at far infra-red and sub-millimetre wavelengths. We have modelled the obscuring medium as either isotropic or having a toroidal geometry, which then dictates the intrinsic column density distributions which are consistent with the line-of-sight distributions found from X-ray and optical observations. Since the spectrum in the sub-mm is rising steeply to higher frequencies, the $`k`$-correction obtained is such that our obscured QSOs are equally visible for redshifts of $`1<z<10`$. We therefore use the observed sub-mm source counts at $`850\mu \mathrm{m}`$ and the spectrum of the far infra-red background to constrain our models, by ensuring that the large quantities of cool dust invoked would not exceed the observed emission. We have shown that a variety of plausible obscured AGN models, which provide good fits to the X-ray background spectrum and the number counts at soft and hard X-ray energies, are consistent with the observed sub-mm source counts and intensity of the FIRB.
The models predict between 1 and 33 per cent of the FIRB intensity, and a similar fraction of the number counts, depending on how extreme the model is, with the more conservative models predicting between 5 and 15 per cent. This is in good agreement with the fact that the majority of sub-mm sources are identified as starburst galaxies, which provide the complementary, dominant contribution. In addition, the obscured AGN models may be a suitable candidate for the source of the sub-mm emission associated with EROs. Finally, we have made predictions of the redshift distribution of sub-mm sources from our models, for both existing and proposed surveys, and described how combined X-ray and sub-mm survey data will be able to determine the extent of the obscured QSO contribution to cosmological backgrounds at both high and low energies.

## Acknowledgments

This paper was prepared using the facilities of the STARLINK node at Durham. KFG acknowledges receipt of a PPARC studentship. We thank Chris Done and Ian Smail for useful discussions.
## 1 Introduction

During the past twenty years, there has been a growing appreciation of the strong biases against finding galaxies of low surface brightness (Disney, 1976). These biases arise because the night sky is not particularly dark: because of the brightness of the sky background, the ability to detect a galaxy depends not only upon the integrated luminosity of the galaxy, but also upon the contrast with which the galaxy stands out above the Poisson fluctuations in the background. Thus our knowledge of the real galaxy population is incomplete: outside the Galaxy, the vast majority of information about stellar populations, kinematics, dark matter content, star formation and large scale clustering has up to now been obtained mostly from studies of high surface brightness galaxies (HSBGs). The discovery and study of LSBGs therefore constitutes a fundamental enterprise in order to reach a better and more complete understanding of all these cosmological questions. LSBGs indeed have a number of remarkable properties which distinguish them from the more familiar Hubble sequence of spirals (Impey $`\&`$ Bothun, 1997); the main ones are: (1) LSBGs seem to constitute at least 50$`\%`$ of the total galaxy population in number, which has strong implications for the faint end slope of the luminosity function, for the baryonic matter density and especially for galaxy formation scenarios; (2) LSBG discs are among the least evolved objects in the local universe, since they have very slow star formation rates; (3) LSBGs are embedded in dark matter halos which are of lower density and more extended than the haloes around HSBGs, and are strongly dominated by dark matter at all radii.

## 2 Method

Searching for galaxies with a surface flux density close to the sky noise level first requires some image manipulation. Detection of LSBGs using standard algorithms, which select connected pixels above a threshold on the original images, fails due to the low signal to noise ratio of these objects. Preliminary plate processing is needed to obtain a final image where the S/N ratio is enhanced. The algorithm we developed for the detection of LSBGs consists of different steps which can be summarized as follows: (1) filtering of large scale background fluctuations; (2) removal of stars and standard astronomical objects; (3) convolution of the cleaned image with ad hoc filters; (4) classification of the candidate objects detected in the previous step. A very important consideration for LSBG detection is to ensure that the sky is as smooth as possible across each region of study. In order to achieve the required flatness we use a procedure which is part of the package SExtractor: the technique consists in creating a map of the background sky in which each pixel takes the mean value (the mode for crowded fields) of the pixels in the original frame in an $`n\times n`$ box surrounding the pixel in question. This map is then subtracted from the original data, producing a sky-subtracted image (a sketch of this step, together with the multi-scale filtering of step (3), is given below). It should be noticed that any object whose size is equal to or greater than the filter box size will be lost, or at least severely degraded. We chose a mesh size of $`128\times 128`$ pixels on DPOSS so as to preserve objects having dimensions up to 2 arcmins. Removal of bright stars and standard astronomical objects is a very important step to be done before performing convolutions: the image must be cleaned from objects that could simulate LSBGs when convolved with the filters.
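A minimal sketch of the background-flattening and multi-scale filtering steps follows. The mode estimator $`2.5\times \mathrm{median}-1.5\times \mathrm{mean}`$ is the common crowded-field approximation; the smooth inter-cell interpolation used by SExtractor is omitted, and all kernel sizes are placeholders rather than the values tuned for DPOSS plates.

```python
import numpy as np
from scipy.signal import fftconvolve

def background_map(img, mesh=128):
    """Coarse sky map (step 1): each mesh x mesh cell gets a robust mode
    estimate, 2.5*median - 1.5*mean, then the map is expanded back to full
    resolution."""
    sky = np.empty(img.shape)
    for y0 in range(0, img.shape[0], mesh):
        for x0 in range(0, img.shape[1], mesh):
            cell = img[y0:y0 + mesh, x0:x0 + mesh]
            sky[y0:y0 + mesh, x0:x0 + mesh] = 2.5 * np.median(cell) - 1.5 * cell.mean()
    return sky

def compensated_exponential(scale):
    """Zero-sum exponential kernel of e-folding length `scale` pixels:
    a positive core minus a pedestal, so a flat background gives no
    response; unit L2 norm so responses come out in units of the noise."""
    size = int(8 * scale) | 1                      # odd support
    y, x = np.mgrid[:size, :size] - size // 2
    ker = np.exp(-np.hypot(x, y) / scale)
    ker -= ker.mean()                              # compensation: sum = 0
    return ker / np.sqrt((ker ** 2).sum())

def significance_map(img, scales=(4, 8, 16)):
    """Steps (1) and (3): flatten the sky, then keep per pixel the most
    significant response among the convolution scales."""
    flat = img - background_map(img)
    noise = flat.std()
    responses = [fftconvolve(flat, compensated_exponential(s), mode="same")
                 for s in scales]
    return np.max(responses, axis=0) / noise

rng = np.random.default_rng(0)
sig = significance_map(rng.normal(1000.0, 10.0, (512, 512)))
print("max S/N on pure noise:", sig.max())   # ~5: the effective detection floor
```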
Removal is performed by replacing these objects with the local mean sky value taken with its noise. We use SExtractor to detect all the standard objects in the plate and select the ones to reject using a criterion based on dimension (isophotal area) and peak flux (surface isophotal flux weighted by peak flux), which clearly discriminates a stellar locus, a region of saturation and a region occupied by diffuse objects like galaxies. We then clean the objects and perform several convolutions of the cleaned image with purposely designed filters. We apply a combination of filters over different scales (as LSBGs have a wide range of possible scale lengths) and obtain a final significance image where every different scale is emphasized at the same time. We use this image as a map of positions of candidate LSBGs. The filters we implemented have a compensated exponential profile. After this procedure, LSBGs have a very high signal-to-noise ratio in the significance map. However, the sample of candidates obtained in this way is still contaminated. Subsequent discrimination between good candidates and spurious objects, and classification, is done by studying radial luminosity profiles, since LSBG discs have mainly exponential profiles. Computer simulations were used to determine the efficiency of the method, and were performed in two different ways: first by using already known LSBGs and submerging them in progressively growing sky noise; second by adding to the original images artificial LSBGs characterized by a defined central surface brightness ($`\mu _0`$) and scale length ($`\alpha `$); a sketch of this injection test follows below. At present, the classification efficiency is lower than the detection efficiency, as the classification is performed on the original, noisier images, and work is in progress to improve this aspect.

## 3 First results and advantages of the method

Data used in this work consist of photographic fields (75 square degrees analyzed until now) from DPOSS plates in three filters (photographic J, F, N). In fig. 1 we report the regions covered by previous surveys (the three most important existing catalogues of LSBGs) as a function of the two structural parameters ($`\alpha `$, $`\mu _0`$), compared with our results. It is important to notice that one field we studied had already been analyzed in the first catalogue made by Schombert $`\&`$ Bothun (1988), and our algorithm finds the three known objects at high significance (15–20 $`\sigma `$) plus 6 other new ones (fig. 2). Our procedure can therefore detect even smaller and fainter objects than those catalogued on plates until now (see the solid line representing the detection limit of the procedure obtained by simulations in fig. 1, and the lined regions). Other advantages of the method are that (1) it is almost completely automated and can easily manage wide field imaging data, as well as providing repeatable and objective detection; (2) the dataset (POSS II) covers the entire northern sky in 3 filters (J, F, N), which allows independent searches in different filters (avoiding biases in colours) and the possibility to build a large catalog of LSBGs of known selection function, obtaining a significant statistical sample.
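The second kind of simulation mentioned above, injecting artificial discs of known ($`\mu _0`$, $`\alpha `$), can be sketched as follows. The exponential profile $`\mu (r)=\mu _0+1.086\,r/\alpha `$ is standard; the zeropoint, pixel scale, sky statistics and the stand-in detection criterion are all illustrative assumptions, not the actual DPOSS calibration or the classification pipeline.

```python
import numpy as np

def inject_lsbg(img, x0, y0, mu0, alpha, zeropoint=30.0, pixscale=1.0):
    """Add a face-on exponential disc with central surface brightness mu0
    (mag/arcsec^2) and scale length alpha: mu(r) = mu0 + 1.086 r / alpha.
    Zeropoint and pixel scale are placeholders."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    r = np.hypot(x - x0, y - y0) * pixscale
    img += pixscale ** 2 * 10.0 ** (-0.4 * (mu0 + 1.086 * r / alpha - zeropoint))
    return img

rng = np.random.default_rng(0)
r_grid = np.hypot(*(np.mgrid[:256, :256] - 128.0))
efficiency = {}
for mu0 in (27.0, 28.0, 29.0):
    for alpha in (5.0, 10.0):
        found = 0
        for _ in range(20):                        # 20 noise realisations
            frame = inject_lsbg(rng.normal(1000.0, 10.0, (256, 256)),
                                128, 128, mu0, alpha)
            ap = frame[r_grid < alpha] - 1000.0    # excess within a scale length
            found += ap.mean() > 3.0 * 10.0 / np.sqrt(ap.size)  # stand-in detector
        efficiency[mu0, alpha] = found / 20.0
print(efficiency)   # recovered fraction as a function of (mu0, alpha)
```

Mapping the recovered fraction over a grid of ($`\mu _0`$, $`\alpha `$) is what produces a detection-limit curve of the kind shown as the solid line in fig. 1.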
# Nucleon Form Factors ’99

Report on the Form Factor session held during the Nucleon99 workshop, Frascati, Italy, June 1999.

## 1 THEORETICAL BACKGROUND

There is now a long history of continuous progress in the understanding of electromagnetic form factors at large momentum transfer. After the pioneering works leading to the celebrated quark counting rules, the understanding of hard scattering exclusive processes has been solidly founded . A perturbative QCD subprocess is factorized from a wave function-like distribution amplitude $`\phi (x_i,Q^2)`$ ($`x_i`$ being the light cone fractions of momentum carried by valence quarks), the $`Q^2`$ dependence of which is analysed in the renormalization group approach. Although an asymptotic expression emerges from this analysis for the $`x`$ dependence of the distribution, it was quickly understood that the evolution to the asymptotic $`Q^2`$ is very slow and that indeed some non perturbative input is required to get reliable estimates of this distribution amplitude at measurable $`Q^2`$. The severe criticism that most of the contributions to the form factor were coming from end-point regions in the $`x`$ integration, especially when very asymmetric distribution amplitudes were used, was answered by Li and Sterman who proposed a modified factorization formula which takes into account Sudakov suppression of elastic scattering for soft gluon exchange. The resulting formula is, for the pion form factor:

$$F=16\pi C_F\int dx\,dy\int b_1db_1\,\widehat{\psi }(x,b_1)\int b_2db_2\,\widehat{\psi }(y,b_2)\,\alpha _S\,T_H(b_1,b_2,x,y),$$ (1)

The integration range for the light-cone fractions of momentum $`x`$ and $`y`$ goes from $`0`$ to $`1`$. The functions $`\widehat{\psi }(x,b)`$ contain a Sudakov form factor which suppresses contributions from large transverse distances $`b`$. This improvement leads to an enlargement of the domain of applicability of perturbative QCD calculations of exclusive processes. Whether accessible data may be understood within this formalism is not yet clear, and different strongly motivated conclusions have been stressed . Let us briefly comment on this.

* Perturbative corrections are still unknown, and it would not be a great surprise if they gave some enhancement factor; remember the K-factor of the Drell-Yan process.
* The description of transverse size effects through Sudakov factors and through intrinsic $`k_T`$ effects in the wave function may give rise to some double counting effects. The phenomenology of Sudakov suppression factors at moderate transfers is basically unknown.
* The Feynman ’soft’ process may be a way to rephrase the perturbative calculation in some kinematical domain where the latter is not sound. It does not, however, seem logical to advocate Sudakov suppression of the perturbative process and not estimate the corresponding suppression factor in the soft case.
* The concept of nuclear filtering may turn out to be very useful to the understanding of the free nucleon data. The relative contributions of short distance dominated versus soft processes should indeed be differentiated by the color transparency phenomenon. Selecting events where the outgoing hard scattered proton is not subject to final state interactions is indeed equivalent to selecting compact configurations which are characteristic of the short distance process.

In conclusion, it is fair to say that nobody now believes that form factors are sufficient to determine the proton distribution amplitude.
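For orientation, it may help to recall the textbook collinear leading-order limit to which a formula like Eq. (1) reduces at asymptotic $`Q^2`$. The toy evaluation below uses the asymptotic distribution amplitude $`\phi (x)=6x(1-x)`$ and the standard pion hard kernel, giving $`Q^2F_\pi =16\pi \alpha _Sf_\pi ^2`$; the fixed $`\alpha _S=0.3`$ is an assumption, and none of the Sudakov or transverse-distance machinery of Eq. (1) is included.

```python
import numpy as np

alpha_s, f_pi = 0.3, 0.131        # fixed coupling (assumption); f_pi in GeV

def phi_as(x):
    """Asymptotic pion distribution amplitude, normalised to 1."""
    return 6.0 * x * (1.0 - x)

def F_pi(Q2, n=4000):
    """Collinear LO convolution: F = (16 pi alpha_s f_pi^2 / 9 Q^2)
    |int dx phi(x)/x|^2, with a midpoint rule that avoids the x = 0
    endpoint where the kernel is singular."""
    x = (np.arange(n) + 0.5) / n
    I = np.mean(phi_as(x) / x)                     # -> 3 for phi_as
    return 16.0 * np.pi * alpha_s * f_pi ** 2 * I ** 2 / (9.0 * Q2)

for Q2 in (4.0, 10.0, 20.0):
    print(f"Q^2 = {Q2:5.1f} GeV^2 :  Q^2 F_pi = {Q2 * F_pi(Q2):.3f} GeV^2")
# Q^2 F_pi is constant by construction (dimensional counting); with the
# asymptotic amplitude it equals 16 pi alpha_s f_pi^2 ~ 0.26 GeV^2.
```

The end-point sensitivity discussed above is visible here too: the $`1/x`$ kernel makes the integral dominated by the small-$`x`$ region whenever the amplitude is broad, which is exactly what the Sudakov factor of Eq. (1) is designed to tame.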
A comprehensive analysis of many more data on different exclusive reactions at large transfers is needed. This in turn necessitates high luminosity, high duty factor, medium energy accelerators .

## 2 TIMELIKE REGION

The difference between the timelike and spacelike meson form factors has been analysed in the framework of perturbative QCD with Sudakov effects included (but only in the simpler meson case). In the timelike region, the amplitude for the hard process ruling $`\gamma ^{}\to \pi ^+\pi ^{}`$ is simple to deduce from the spacelike formula:

$$T_H=16\pi \alpha _SC_F\frac{xQ^2}{xQ^2+𝐤^2-i\epsilon }\frac{1}{xyQ^2+(𝐤-𝐥)^2-i\epsilon },$$ (2)

changing $`Q^2\to -W^2`$. The new feature with respect to the spacelike form factor is that the contour of transverse momenta integration now goes near poles located at either $`𝐤^2=xW^2+i\epsilon `$ or $`(𝐤-𝐥)^2=xyW^2+i\epsilon `$. Technically, these poles are, except in the end point regions ($`x,y\to 0`$), far from the bounds of integration of the two independent variables $`k=|𝐤|`$ and $`K=|𝐤-𝐥|`$. Therefore, we may evaluate the integral by deforming the contour of integration in the complex plane of each of these variables. The result of this analysis (for the meson case) is that the asymptotic behavior is the same in the timelike and spacelike regions, but that the approach to asymptotia is quite slow and a rather constant enhancement of the timelike value is expected at measurable large $`Q^2`$. This study should be enlarged to the nucleon case, where such an enhancement is clearly shown by experimental data .

## 3 EXPERIMENTAL PROGRESS

Up until only a few years ago, the quality of available data on nucleon form factors was quite limited, except for those on the magnetic form factor of the proton $`G_M^p`$, which had been accurately studied up to over 30 (GeV/$`c`$)$`^2`$. Accurate measurements of the electric form factor of the proton $`G_E^p`$ were restricted to $`Q^2`$-values below 1 (GeV/$`c`$)$`^2`$, because of the $`Q^2`$-weighting of the contribution from $`G_M^p`$ in the Rosenbluth-separation technique. Studies of both neutron form factors had to use elastic or quasielastic scattering off a deuteron, whereby the subtraction of the contribution from the proton caused sizeable systematic uncertainties in the analysis. It has been known for quite some time that the quality of the data would be improved significantly by scattering polarized electrons either from a polarized target or from an unpolarized target while measuring the polarization of the recoiling or knocked-out nucleon. However, it has only been in the last few years that polarized beams became available with high polarization and intensity and that the required polarized targets and recoil polarimeters were developed. This has resulted in a first batch of new data with high precision. $`G_E^n`$ has been measured at low $`Q^2`$ at Mainz and NIKHEF using polarized deuteron and $`{}_{}{}^{3}He`$ targets and neutron polarimeters. All these experiments used large-acceptance detectors to compensate for the still limited luminosity, which required extensive Monte Carlo analysis techniques. Nuclear corrections, which for the deuteron turned out to be sizeable at very low $`Q^2`$-values, amounted to $`50\%`$ for $`{}_{}{}^{3}He`$ at $`Q^2\sim 0.35`$ (GeV/$`c`$)$`^2`$, but now $`G_E^n`$-data are available with an accuracy of $`15\%`$ up to 0.65 (GeV/$`c`$)$`^2`$.
$`G_E^p`$ has been measured in Hall A at JLab in a $`Q^2`$-range up to 3.5 (GeV/$`c`$)$`^2`$ using a focal-plane polarimeter to measure the polarization of the recoiling proton. The data, with a statistical and systematic accuracy of less than $`8\%`$, show that $`G_E^p`$ decreases with $`Q^2`$ relative to $`G_M^p`$ and the dipole prediction, indicating that the spatial distribution of the charge inside the proton extends further than that of the magnetization. $`G_M^n`$ has been accurately measured up to 0.8 (GeV/$`c`$)$`^2`$ at Mainz and Bonn by measuring the ratio of neutron to proton knock-out from unpolarized deuterium. However, the two data sets do not overlap within their error bars, and a new measurement of $`G_M^n`$ in a similar $`Q^2`$-range has recently been performed at JLab , by studying quasi-elastic scattering of polarized electrons from a polarized $`{}_{}{}^{3}He`$ target. Further experiments have already been scheduled, or can be expected in a more distant future, to improve the accuracy and/or extend the $`Q^2`$-range of the existing data set. $`G_E^n`$ will be measured at JLab up to $`Q^2\sim 2`$ (GeV/$`c`$)$`^2`$ in two separate experiments , one using a neutron polarimeter, the other a polarized deuterium target. The BLAST detector at the MIT-Bates facility will provide very accurate data in a lower $`Q^2`$-range, up to $`0.8`$ (GeV/$`c`$)$`^2`$. The JLab $`G_E^p`$ data set will be extended first to 6 (GeV/$`c`$)$`^2`$ with the same set-up as used in the first experiment , later to 10 (GeV/$`c`$)$`^2`$ using a lead-glass calorimeter for the detection of the scattered electron .

## 4 CONCLUSION

New data on nucleon form factors with an unprecedented precision have become (and will continue to become) available in an increasing $`Q^2`$ domain. However, it is still difficult to make a precise statement on the applicability of improved perturbative calculations of the proton form factor at available momentum transfers. Future experience, to be gained from experiments at JLab at higher energies and at proposed dedicated machines , will provide essential information.
# Lyapunov exponents and Kolmogorov-Sinai entropy for a high-dimensional convex billiard

## I Introduction

Billiards are simple yet nontrivial examples of systems that display classically chaotic motion. Of special importance are the Sinai billiard and the Bunimovich stadium, since they are known to be completely chaotic. Interestingly, these two systems exhibit two different mechanisms that generate chaos. While dispersion is the chaos-generating mechanism in the Sinai billiard, it is defocusing that leads to chaotic dynamics in the Bunimovich stadium. Dispersion yields a permanent divergence of neighboring trajectories. Defocusing may occur upon reflection at a focusing boundary element. Provided the free path is sufficiently long, nearby trajectories start to diverge after passing through the focusing point, and on average the divergence might exceed the convergence, thus leading to exponential instability. Dispersing billiards are well known also in higher dimensions. Popular examples are the three-dimensional Sinai billiard and the hard sphere gas. However, it was not until recently that completely chaotic billiards were constructed in more than two spatial dimensions that rely entirely on the defocusing mechanism . These billiards use spherical caps as the focusing elements of the boundary. A trajectory diverges mainly in a two-dimensional plane that is defined by the points of consecutive reflections with the spherical cap, and focusing may be very weak in the transversal directions. This makes it more difficult to create truly high-dimensional chaos in focusing billiards than in dispersing ones. Sufficient conditions for the construction of high-dimensional focusing billiards were given in ref., but it was found that these are not necessary ones. Besides their intrinsic interest, billiards are important model systems in the field of quantum chaos and statistical and fluid mechanics . Questions related to chaos, ergodicity, transport and equilibration are often studied in billiard models, see e.g. refs.. While two-dimensional chaos is fairly well understood by now, much less is known in high-dimensional systems. Recently, a high-dimensional billiard model has been proposed in the context of nuclear physics and quantum chaos . Within this model a self-bound $`N`$-body system is realized as a convex billiard. Numerical computations yielded a positive largest Lyapunov exponent and showed that this system is predominantly chaotic. In this article we want to extend previous calculations and compute the full Lyapunov spectrum and the KS entropy for this chaotic $`N`$-body system. These quantities characterize the degree of hyperbolic instability in dynamical systems and may be related to transport coefficients in non-equilibrium situations . Since the studied billiard is convex, defocusing is the only possible source of this instability . This makes it interesting to examine this mechanism in more detail and compare to the situation of defocusing billiards with spherical caps. The results of such an investigation are not only of theoretical interest but may also be useful for further applications. We have in mind general questions concerning chaos in self-bound many-body systems like nuclei or atomic clusters and its influence on equilibration, damping or transport processes. This paper is organized as follows. In the next section we describe the model system and the techniques used to compute the Lyapunov exponents.
The third section contains the results of our numerical computations for various system sizes $`N`$. In section four we investigate the defocusing mechanism in more detail. We finally give a summary.

## II High-dimensional billiard and Lyapunov exponents

Let us consider a classical system of $`N`$ particles with Hamiltonian

$$H=\sum _{i=1}^{N}\frac{p_i^2}{2m}+\sum _{i<j}V(|\vec{r}_i-\vec{r}_j|),$$ (1)

where $`\vec{r}_i`$ is a two-dimensional position vector of the $`i`$-th particle and $`\vec{p}_i`$ is its conjugate momentum. The interaction is given by

$$V(r)=\left\{\begin{array}{cc}0\hfill & \text{for }r<a,\hfill \\ \mathrm{\infty }\hfill & \text{for }r\ge a.\hfill \end{array}\right.$$ (4)

Thus, the particles move freely and interact whenever the distance between a pair of particles reaches its maximum value $`a`$. Hamiltonian (1) defines a self-bound, interacting many-body system. Energy, total momentum and total angular momentum are conserved quantities. For large numbers of particles the points of interaction are close to a circle of diameter $`a`$ and therefore define a rather thin surface. Therefore, this system is a simple classical model for nuclei or atomic clusters. For finite values of the binding potential the system is amenable to a mean field description . Hamiltonian (1) may also be viewed as a special case of the square well gas with infinite binding potential. However, to the best of our knowledge, the square well gas has not been investigated for such parameter values. In what follows we restrict ourselves to the case of vanishing total momentum and angular momentum. In the limit $`N\to \mathrm{\infty }`$ the number density diverges for the self-bound many-body system (1,4). A constant density may be obtained once the parameter $`a`$ is rescaled as $`a\to aN^{1/3}`$, thus turning the Hamiltonian (1,4) into an effective Hamiltonian. In what follows we work with an $`N`$-independent parameter $`a`$. Since the billiard is a scaling system, one may easily rescale the results obtained below to adapt for different values of $`a`$. The time evolution of a many-body system with billiard-like interactions requires an effort $`𝒪(N\mathrm{ln}N)`$, to be compared with the effort $`𝒪(N^2)`$ for a generic two-body interaction . Initially one computes the $`N(N-1)/2`$ times at which pairs of particles may interact and organizes these in a partially ordered binary tree, keeping the shortest time at its root. Immediately after an interaction of particles labelled $`i`$ and $`j`$ one has to recompute $`2N-3`$ times corresponding to future interactions between particles $`i`$ and $`j`$ and the remaining ones. The insertion of each new time into the partially ordered tree requires only an effort $`𝒪(\mathrm{ln}N)`$. Between consecutive interactions particles move freely. Upon an interaction of the particles labelled $`i`$ and $`j`$, the momenta change according to

$$\vec{p}^{\,\prime }=\vec{p}-2\frac{\vec{p}\cdot \vec{r}}{a^2}\,\vec{r}.$$ (5)

Here, $`\vec{p}\equiv \vec{p}_i-\vec{p}_j`$ and $`\vec{p}^{\,\prime }`$ are the relative momentum vectors immediately before and after the interaction, respectively, and $`\vec{r}\equiv \vec{r}_i-\vec{r}_j`$ is the relative position vector, with magnitude $`|\vec{r}|=a`$ at the interaction. Obviously, eq. (5) describes a reflection in the center of mass system of the two interacting particles. We now turn to the computation of the Lyapunov exponents.
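Before doing so, a sketch of the elementary event-driven step just described, the next interaction time of a pair and the momentum update of eq. (5), is given below (units with $`m=1`$, so momenta and velocities coincide; the $`𝒪(\mathrm{ln}N)`$ binary-tree bookkeeping is omitted).

```python
import numpy as np

def pair_collision_time(r, v, a=1.0):
    """Time at which |r + v t| reaches a for a pair at relative position r
    (|r| < a) with relative velocity v: the positive root of
    v^2 t^2 + 2 (r.v) t + (r.r - a^2) = 0."""
    b, c0, v2 = np.dot(r, v), np.dot(r, r) - a * a, np.dot(v, v)
    return (-b + np.sqrt(b * b - v2 * c0)) / v2   # c0 < 0, so real and positive

def collide(p_i, p_j, r, a=1.0):
    """Momentum update of eq. (5) for equal masses: reflect the relative
    momentum on the sphere |r| = a in relative coordinates."""
    dp = (np.dot(p_i - p_j, r) / a ** 2) * r
    return p_i - dp, p_j + dp

# smoke test: advance one pair to its interaction and reflect it
r, v = np.array([0.3, 0.1]), np.array([1.0, -0.2])
t = pair_collision_time(r, v)
r_c = r + v * t
print("collision at t = %.4f, |r| = %.6f" % (t, np.linalg.norm(r_c)))
p_i, p_j = collide(np.array([0.5, -0.1]), np.array([-0.5, 0.1]), r_c)
print("total momentum after:", p_i + p_j)          # conserved
```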
We describe the techniques used rather briefly, since a large body of literature exists on the subject, see e.g. refs.. A Hamiltonian system with $`f`$ degrees of freedom possesses $`f`$ independent Lyapunov exponents $`\lambda _1,\dots ,\lambda _f`$ ordered such that $`0\le \lambda _1\le \dots \le \lambda _f`$. Since the Hamiltonian flow preserves phase space volume, there are also $`f`$ non-positive Lyapunov exponents with $`\lambda _{-j}=-\lambda _j`$. A system with $`n`$ integrals of motion has $`n`$ vanishing Lyapunov exponents $`\lambda _1=\dots =\lambda _n=0`$, while a chaotic system has a positive largest Lyapunov exponent $`\lambda _f>0`$. This exponent $`\lambda _f`$ is the rate at which neighboring trajectories diverge under the time evolution. Benettin et al. gave a method to compute the largest Lyapunov exponent from following the time evolution of a reference trajectory and a second one that is initially slightly displaced. The displacement vector has to be rescaled after some finite evolution in a compact phase space. To compute the full spectrum of Lyapunov exponents one has to follow $`f`$ trajectories besides the reference trajectory . This defines $`f`$ independent displacement vectors, and finite numerical precision requires their reorthogonalization besides the rescaling during the time evolution. Rather than following the time evolution of finite displacement vectors one may also use infinitesimal displacements (tangent vectors) in the computation of the Lyapunov exponents. In tangent space the time evolution is given by a linear mapping. Details about the tangent map in high-dimensional billiards can be found in refs.. In a completely chaotic system the KS entropy is given by the sum of all positive Lyapunov exponents, i.e. $`h_{\mathrm{KS}}=\sum _{j=1}^f\lambda _j`$. The KS entropy measures the rate at which information about the initial state of a system is lost.

## III Results

In what follows we consider the $`N`$-body system at vanishing total momentum and angular momentum. We use units such that $`a=m=E/N=1`$. Times are then given in units of $`a(mN/E)^{1/2}`$. We choose initial conditions at random and follow a trajectory for at least $`10^6`$ collisions. This ensures a good convergence of the numerically computed Lyapunov spectra. We have checked our results as follows: the time evolution was checked by comparing forward with backward propagation; the Lyapunov spectra were checked by comparing the results obtained from the tangent map with those obtained by Benettin’s method involving finite displacements; the computation of all Lyapunov exponents showed that $`\lambda _j+\lambda _{-j}`$ vanishes within our numerical accuracy; we found four pairs of vanishing Lyapunov exponents corresponding to the conserved quantities. The Lyapunov spectra for systems of sizes $`N=10,30,100,300`$ particles are plotted in Fig. 1. We note that the $`N`$-body system possesses $`2N-4`$ positive Lyapunov exponents. This shows that there are no further integrals of motion besides energy, momentum and angular momentum, and that truly high-dimensional chaos is developed. We discuss this finding in detail in the following section. The Lyapunov exponent $`\lambda _i`$ is a smooth function of its index, with a rather small smallest positive Lyapunov exponent $`\lambda _5`$. This behavior is similar to the case of the Lennard-Jones fluid or the Fermi-Pasta-Ulam model, but differs from the hard sphere gas, where a rather large smallest positive Lyapunov exponent was found.
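The reorthogonalization procedure sketched above is generic; the toy code below implements it for an arbitrary differentiable map and its Jacobian, checking the result on Arnold's cat map, whose exponents $`\pm \mathrm{ln}[(3+\sqrt{5})/2]`$ are known exactly. The billiard itself would require the collision tangent map mentioned above in place of the simple Jacobian used here.

```python
import numpy as np

def lyapunov_spectrum(step, jac, x0, n_steps=1000):
    """Tangent-map Lyapunov spectrum of a map x -> step(x): carry an
    orthonormal frame with the Jacobian and re-orthonormalise it by QR at
    every step, accumulating log |diag R| (Benettin et al. style)."""
    x = np.array(x0, dtype=float)
    Q = np.eye(x.size)
    acc = np.zeros(x.size)
    for _ in range(n_steps):
        Q = jac(x) @ Q                     # evolve the tangent frame
        x = step(x)                        # evolve the reference trajectory
        Q, R = np.linalg.qr(Q)             # re-orthonormalise
        acc += np.log(np.abs(np.diag(R)))
    return np.sort(acc / n_steps)[::-1]

# check on Arnold's cat map: exponents are +/- ln((3 + sqrt(5)) / 2)
A = np.array([[2.0, 1.0], [1.0, 1.0]])
lam = lyapunov_spectrum(lambda x: (A @ x) % 1.0, lambda x: A, [0.3, 0.7])
print(lam, "expected +/-", np.log((3 + np.sqrt(5)) / 2))
print("h_KS (Pesin) =", lam[lam > 0].sum())   # sum of positive exponents
```

The last line is the Pesin identity used in the text: for a completely chaotic system the KS entropy is the sum of the positive exponents.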
Note that the spectra seem to converge with increasing $`N`$. Table I displays the largest and smallest positive Lyapunov exponents, collision rates and the KS entropies. It is interesting to examine the $`N`$-dependence in more detail. Fig. 2 shows that the KS entropy $`h_{\mathrm{KS}}`$ and the collision rate $`\tau ^{-1}`$ depend linearly on the system size $`N`$. The case of the collision rate is easily understood, since the constant single particle energy keeps the collision rate of each particle with the surface constant, too. The KS entropy is roughly given by the area under the corresponding spectrum presented in Fig. 1. Since the spectra converge approximately with increasing $`N`$, this area increases linearly with the number of particles. The $`N`$-dependence of the largest Lyapunov exponent $`\lambda _{2N}`$ is shown in Fig. 3 and may be approximated by a logarithmically increasing curve. In the case of the hard sphere gas the $`N`$-dependence could be understood for sufficiently low densities using techniques borrowed from kinetic theory . Unfortunately, these ideas cannot directly be transferred to our system, since the density is not a small parameter. Note, however, that the largest Lyapunov exponent decreases with increasing $`N`$ once the density is kept constant after rescaling $`a\to aN^{1/3}`$. This is interesting in view of nuclear physics, since this result differs qualitatively from simple billiard (mean-field) models. Scaling arguments for such models show that the largest Lyapunov exponent increases with $`N`$ at constant density and single-particle energy. The numerical results obtained in this work indicate that the considered billiard systems exhibit truly high-dimensional chaos. We recall that the system is convex and does not possess any dispersing elements. Furthermore, it differs in construction from the high-dimensional focusing billiards with spherical caps studied in refs..

## IV Defocusing mechanism

Let us examine the defocusing mechanism in the billiard considered in this work. We do not try to prove that the considered system is completely chaotic – which seems difficult at least – but rather want to understand the numerically observed phenomenon of chaotic motion in more detail. To this purpose, and based on our numerical results, we assume that the system is (predominantly) chaotic, and that chaos is generated by the only possible mechanism, namely defocusing . We may then clarify how high-dimensional chaos develops and thus understand why we observe $`2N-4`$ positive Lyapunov exponents. This investigation may hopefully serve also as a starting point and a motivation for further research. For simplicity, let us consider the three-body system first. It is useful to study this system as a billiard in full six-dimensional configuration space. This is possible since the change in relative momentum (5) caused by an interaction of two particles corresponds to a specular reflection in the billiard. We denote vectors in configuration space by capital letters, as $`\vec{R}=(\vec{r}_1,\vec{r}_2,\vec{r}_3)`$, where $`\vec{r}_i=(x_i,y_i)`$ is the two-dimensional position vector of the $`i^{\mathrm{th}}`$ particle.
The part of the boundary where the particles labelled $`i=1,2`$ interact may be parametrized as

$$\vec{X}_{(12)}=\left(\vec{r}+\frac{a}{2}\vec{e}_\alpha ,\ \vec{r}-\frac{a}{2}\vec{e}_\alpha ,\ \vec{r}_3\right),$$ (6)

$$\vec{e}_\alpha =(\mathrm{cos}\alpha ,\mathrm{sin}\alpha ).$$ (7)

The (outwards pointing) normal vector $`\partial _a\vec{X}_{(12)}`$ and the tangent vector $`\partial _\alpha \vec{X}_{(12)}`$ span the two-dimensional planes where divergence due to defocusing might be generated. These planes come in a four-dimensional family due to the parameters $`\vec{r}`$ and $`\vec{r}_3`$ in eq. (6). Basis vectors for these planes may be chosen as

$$\vec{E}_1=((1,0),(-1,0),(0,0))/\sqrt{2},$$ (8)

$$\vec{E}_2=((0,1),(0,-1),(0,0))/\sqrt{2}.$$ (9)

Similar arguments show that there are two further planes where defocusing might be generated, corresponding to interactions between particles $`(1,3)`$ and $`(2,3)`$, respectively. These planes are spanned by the basis vectors

$$\vec{E}_3=((1,0),(0,0),(-1,0))/\sqrt{2},$$ (10)

$$\vec{E}_4=((0,1),(0,0),(0,-1))/\sqrt{2}$$ (11)

and

$$\vec{E}_5=((0,0),(1,0),(-1,0))/\sqrt{2},$$ (12)

$$\vec{E}_6=((0,0),(0,1),(0,-1))/\sqrt{2},$$ (13)

respectively. Four of the six basis vectors $`\vec{E}_i`$ are linearly independent. The vectors $`\vec{X}=((1,0),(1,0),(1,0))`$ and $`\vec{Y}=((0,1),(0,1),(0,1))`$ correspond to displacements of the center of mass and are orthogonal to the vectors $`\vec{E}_i`$. This is expected since the center of mass moves freely. It is important to note that the boundary is neutral (i.e. neither focusing nor dispersing) in the transverse directions. It is straightforward to generalize these considerations to $`N`$ bodies. In the case of the $`N`$-body billiard there are $`N(N-1)/2`$ families of two-dimensional planes where defocusing might possibly occur. These families are related by those permutations that involve two out of $`N`$ particles, i.e. transpositions. $`2(N-1)`$ out of the $`N(N-1)`$ basis vectors $`\vec{E}_i`$ are linearly independent. The two vectors corresponding to the displacement of the center of mass are orthogonal to the vectors $`\vec{E}_i`$; this can be verified directly, as in the sketch below.
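The counting statements above are easy to check numerically: building the $`N(N-1)`$ vectors $`\vec{E}_i`$ for arbitrary $`N`$ and computing the rank of the resulting matrix gives $`2(N-1)`$, with all $`\vec{E}_i`$ orthogonal to the center-of-mass directions. A minimal check, using no assumptions beyond the definitions above:

```python
import numpy as np
from itertools import combinations

def defocusing_basis(n_part):
    """All N(N-1) vectors E of the text for N particles in two dimensions:
    for each pair (i, j), the x and y relative-displacement directions
    (e_i - e_j)/sqrt(2) in the 2N-dimensional configuration space."""
    rows = []
    for i, j in combinations(range(n_part), 2):
        for comp in (0, 1):                      # x and y components
            v = np.zeros(2 * n_part)
            v[2 * i + comp], v[2 * j + comp] = 1.0, -1.0
            rows.append(v / np.sqrt(2.0))
    return np.array(rows)

for n in (3, 5, 10, 30):
    E = defocusing_basis(n)
    com = np.zeros((2, 2 * n))                   # centre-of-mass directions X, Y
    com[0, 0::2] = com[1, 1::2] = 1.0
    print(f"N={n:3d}: {E.shape[0]:4d} vectors, rank {np.linalg.matrix_rank(E)}"
          f" = 2(N-1) = {2 * (n - 1)}, max |E.X| = {np.abs(E @ com.T).max():.1e}")
```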
It is interesting to note that this mechanism differs from the one investigated by Bunimovich et al. . The neutral behavior in the transverse directions has the advantage that it avoids the problems caused by the weak convergence occurring in the transversal directions upon reflections from higher-dimensional spherical caps. It has the disadvantage that several focusing elements are needed to produce high-dimensional chaos while a single spherical cap may be sufficient. ## V Conclusions We have computed the Lyapunov spectrum and the KS entropy for an interacting $`N`$-body system in two spatial dimensions which is realized as a convex billiard in $`2N`$-dimensional configuration space. In presence of four conserved quantities we find the maximal number of $`2N4`$ positive Lyapunov exponents. Thus, the system exhibits high-dimensional chaos. At fixed single particle energy the largest Lyapunov exponent grows with $`\mathrm{ln}N`$ while the KS entropy grows and the collision rate increase linearly with $`N`$. In an attempt to understand the chaotic nature of the billiard we have identified several symmetry related two-dimensional planes where defocusing might be generated. Their number and orientation in configuration space is such that a long trajectory may exhibit divergence in $`2N4`$ directions of phase space. This mechanism of focusing differs from the one proposed recently by Bunimovich and Rehacek. Let us finally comment on chaos in realistic many-body systems. Though the considered model is a crude approximation of realistic self-bound many-body systems like nuclei or clusters it incorporates the important ingredient of an attractive two-body interaction that acts mainly at the surface of the system. This is the basic picture we have for nuclei and clusters, where the complicated two-body force creates a rather flat mean-field potential, and particles experience mainly a surface interaction. However, unlike in the model system, an interaction at the nuclear surface involves more than just two nucleons, and the two-dimensional planes where defocusing is generated in the model system are replaced by some higher-dimensional ones. This might also introduce the problem caused by weak focusing in transverse directions. It is fair to assume that truly high-dimensional chaos may develop upon several collisions with the surface. Though the detailed analysis seems much more complicated than in the studied model system, the basic picture developed in this work should be applicable to some extend also in the case of more realistic two-body interactions.
# Fast Response and Temporal Coding on Coherent Oscillations in Small-World Networks

## Abstract

We have investigated the role that different connectivity regimes play on the dynamics of a network of Hodgkin-Huxley neurons by computer simulations. The different connectivity topologies exhibit the following features: random connectivity topologies give rise to fast system response yet are unable to produce coherent oscillations in the average activity of the network; on the other hand, regular connectivity topologies give rise to coherent oscillations and temporal coding, but on a temporal scale that is not in accordance with fast signal processing. Finally, small-world (SW) connectivity topologies, which fall between random and regular ones, take advantage of the best features of both, giving rise to fast system response with coherent oscillations along with reproducible temporal coding on clusters of neurons. Our work is the first, to the best of our knowledge, to show the need for a small-world topology in order to obtain all these features in synergy within a biologically plausible time scale.

In a recent letter by Watts and Strogatz it was shown that small-world networks enhance signal-propagation speed, computational power, and synchronizability. Small-world stands for a network whose connectivity topology is placed somewhere between a regular and a completely random connectivity. The main properties of these specific networks are that they can be highly clustered like regular networks and, at the same time, have small path lengths like random ones. Therefore, small-world networks may have properties given neither in regular nor in random networks . In this letter we extend Watts and Strogatz’s general framework by introducing dynamical elements in the network nodes. Our source of inspiration is a phenomenon observed in the olfactory antennal lobe (AL) of the locust, discovered by Gilles Laurent and collaborators . The AL is a group of around 800 neurons whose functional role is to relay information from the olfactory receptors to higher areas of the brain for further processing. Three main features have been observed in the dynamics of the AL. First, there is a fast response of the AL when the stimulus is presented. Second, when an odour is presented to the insect, coherent oscillations of 20 Hz in the local field potential (LFP) are measured . Third, every neuron responds to the odour with some particular timing with respect to the LFP . Summarizing: fast response, coherent oscillations and temporal coding are observed. There are also other systems in the brain that present coherent LFP oscillations, hinting at the generality of these phenomena (see for a review). The cooperative behavior of large assemblies of dynamical elements has been the subject of many investigations. In all of them the connectivity between the elements of the network was either regular (local or global all-to-all) or random. However, none of these studies incorporates a comparative analysis of network dynamics for all the different connectivity topologies. In the present work we aim to show that in order to provide fast response, coherent oscillations and temporal coding, a small-world topology is required. We will show that the regular connectivity topology provides a slow response to the external input.
Although it is able to produce temporal coding and coherent oscillations, the time of formation of the oscillations would imply much slower responses than those observed in biological systems. On the other hand, for the completely random connectivity case the responsiveness of the system is highly increased and temporal variations in cluster activity are present, but the coherent oscillations typically observed in the LFP are lost. Without these coherent oscillations the AL seems to lose its ability to process the information incoming from the sensors . The model we propose for this study is made of an array of non-identical Hodgkin-Huxley elements coupled by excitatory synapses. The unit dynamics is described by the following set of coupled ordinary differential equations:

$$C_m\dot{V}_i=I^e(t)-g_L\widehat{V}_L-g_{Na}m^3h\widehat{V}_{Na}-g_Kn^4\widehat{V}_K+I^s(t)$$ (1)

$$\dot{m}=\alpha _m(V)(1-m)-\beta _m(V)m$$ (3)

$$\dot{h}=\alpha _h(V)(1-h)-\beta _h(V)h$$ (4)

$$\dot{n}=\alpha _n(V)(1-n)-\beta _n(V)n$$ (5)

where $`V_i`$ represents the membrane potential of unit $`i`$; $`C_m`$ is the membrane capacitance per unit area; $`I^e(t)`$ is the external current, which occurs as a pulse of amplitude $`I_0`$; $`I^s(t)`$ is the synaptic current; $`\widehat{V}_r=V_i-V_r`$, where $`V_r`$ are the equilibrium potentials for the different ionic contributions ($`r=L,Na,K`$), and $`g_r`$ are the corresponding maximum conductances per unit area; $`h`$, $`m`$, $`n`$ are the voltage dependent conductances; and $`\alpha `$, $`\beta `$ are functions of $`V`$ adjusted to physiological data by voltage clamp techniques. We have used the original functions and parameters employed by Hodgkin and Huxley . The system was integrated using the Runge-Kutta 6(5) scheme with variable time step based on . The absolute error was $`10^{-15}`$ and the relative error was $`10^{-7}`$ in all the calculations presented in this letter. The synaptic current $`I^s`$ is given by

$$I_i^s(t)=g_{ij}r_j(t)[V_s(t)-E_s]$$ (6)

where $`i`$ stands for the index of the neuron that receives the synaptic input, $`j`$ is the neuron from which the synaptic input is received, and $`g_{ij}`$ is the maximum conductance, which determines the degree of coupling between the two connected neurons. $`V_s`$ is the postsynaptic potential, $`E_s`$ is the synaptic reversal potential and $`r_j(t)`$ is the fraction of bound receptors, computed following the method and parameters described by Destexhe et al. . Namely, the dynamics of the bound receptors $`r`$ is given by the equation

$$\dot{r}=\alpha [T](1-r)-\beta r$$ (7)

where $`[T]`$ is the concentration of the transmitter, and $`\alpha `$, $`\beta `$ are the rise and decay constants, respectively. In this model three different kinds of connectivity patterns have been tested: regular, random and small world. To interpolate between regular and random networks we follow the procedure described by Watts and Strogatz, which we summarize here for convenience: we start from a ring lattice with $`N`$ vertices and $`k`$ edges per vertex, and each edge is rewired at random with probability $`p`$ (see the sketch below). The limits of regularity and randomness are $`p=0`$ and $`p=1`$ respectively, and the small-world topology lies somewhere in the intermediate region $`0<p<1`$. The quantification of the structural properties of these graphs is performed, following Watts and Strogatz , using their characteristic path length $`L(p)`$ and their clustering coefficient $`C(p)`$.
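A compact sketch of that rewiring procedure (self-links and duplicate edges are excluded, as in the original construction; the concrete sizes used here are arbitrary):

```python
import numpy as np

def watts_strogatz(n, k, p, rng=None):
    """Ring lattice with n vertices and k edges per vertex (k even), each
    edge rewired with probability p.  Returns adjacency sets."""
    rng = rng or np.random.default_rng()
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):           # the k/2 clockwise neighbours
            w = (v + d) % n
            if rng.random() < p:                 # rewire the far end of (v, w)
                candidates = [u for u in range(n) if u != v and u not in adj[v]]
                w = candidates[rng.integers(len(candidates))]
            adj[v].add(w)
            adj[w].add(v)
    return adj

g = watts_strogatz(1000, 10, 0.05, np.random.default_rng(1))
print(sum(len(s) for s in g.values()) // 2, "edges (~ n k / 2)")
```

At $`p=0`$ this reproduces the regular ring; at $`p=1`$ essentially a random graph; small intermediate $`p`$ introduces the few long-range shortcuts that collapse $`L(p)`$ while leaving $`C(p)`$ high.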
$`L(p)`$ is defined as the number of edges in the shortest path between two vertices, averaged over all pairs of vertices. $`C(p)`$ is defined as follows: suppose that a vertex $`v`$ has $`k_v`$ neighbours; then at most $`k_v(k_v-1)/2`$ edges can exist between them. Let $`C_v`$ denote the fraction of these allowable edges that actually exist, and define $`C`$ as the average of $`C_v`$ over all vertices $`v`$. Fig. 1a replicates that of Watts and Strogatz for ease of reference and to verify our computations. Next we investigate the functional significance of SW topologies for the dynamics of the network. Watts and Strogatz already note that small-world networks of coupled phase oscillators synchronize almost as readily as in the mean-field model, despite having orders of magnitude fewer edges. To study the global behavior of the network we compute its average activity $`\overline{V(t)}=(1/N)\sum _{i=1}^NV_i(t)`$. The quantity that we use to detect the onset and degree of coherent oscillations is the average activity oscillation amplitude , defined by $$\sigma ^2(p)=\frac{1}{T_2-T_1}\int _{T_1}^{T_2}\left[\left\langle \overline{V_p(t)}\right\rangle _t-\overline{V_p(t)}\right]^2dt$$ (7) where $`\overline{V_p(t)}`$ is the average activity of the network for a given value of the probability $`p`$, and the angle brackets denote temporal average over the integration interval. A high value of $`\sigma (p)`$ would imply a high amplitude of the oscillations of the average activity, while a low value would indicate an almost non-oscillatory behavior. In Fig. 1b we plot $`\sigma (p)`$ for each of the different networks characterized by its probability $`p`$. Notice that coherent oscillations increase in the region in which a high $`C(p)`$ and a low $`L(p)`$ occur simultaneously; this is precisely the SW region. This can be better observed in Fig. 2, which shows the average activity of the network in three cases corresponding to the three different topological configurations: regular, random and SW. Both the regular and the SW topologies display coherent oscillations, but in the regular network they appear much later and their amplitude is smaller than in the SW case. On the other hand, the random network only displays irregular variations over an almost constant pattern of activity. A more extensive study in the $`(k,p)`$ plane has been performed in order to establish the limits for the appearance of coherent oscillations and to check that our previous results can be generalized within a certain range of parameters. We have computed the average activity oscillation amplitude for a total of $`180`$ points in the $`(k,p)`$ plane, taking an integration interval between $`T_1=100`$ and $`T_2=200`$. An interpolation of these results is plotted in Fig. 3, where the clear zones indicate high values of $`\sigma `$. We can conclude from this figure that fast coherent oscillations appear only in the region of intermediate probabilities, that is, the SW. The a priori limits on $`k`$ are based on the fact that for $`k`$ lower than $`10`$ the activation of the network is very weak, while for $`k`$ higher than $`35`$ some neurons become saturated. 
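For reference, the coherence measure of Eq. (7) amounts to the following computation on the recorded membrane potentials (a sketch under our own array conventions; `V` holds the $`N`$ traces sampled at times `t`):

```python
import numpy as np

def sigma_p(V, t, T1=100.0, T2=200.0):
    """Average-activity oscillation amplitude, Eq. (7).
    V: array of shape (N, len(t)) with the membrane potentials V_i(t);
    t: 1-d array of sample times."""
    Vbar = V.mean(axis=0)                  # network average activity
    sel = (t >= T1) & (t <= T2)            # restrict to [T1, T2]
    tt, vv = t[sel], Vbar[sel]
    mean = np.trapz(vv, tt) / (tt[-1] - tt[0])            # temporal average
    var = np.trapz((vv - mean) ** 2, tt) / (tt[-1] - tt[0])
    return np.sqrt(var)
```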
Having shown the necessity of a SW network to obtain a fast response along with coherent oscillations of the average activity, we proceed to check the ability of the network to produce a temporal codification of the information contained in the stimulus, and the robustness of the network response to the introduction of noise in the input. In temporal coding, information is represented by the timing of action potentials with respect to an ongoing collective oscillatory pattern of activity. For instance, when an odour is presented, every neuron in the AL responds to the odour with some particular timing with respect to the LFP . As a measure of this temporal coding, we have divided time in periods of the global average activity, and calculated for each period the quantity: $$A_i(n)=\frac{1}{C}\int _T\left[a_i(t)-\overline{V(t)}\right]^2dt$$ (8) where $`i`$ represents a particular cluster, $`n`$ a particular period of the mean activity $`\overline{V(t)}`$ of the whole network, $`a_i(t)`$ is the mean activity of cluster $`i`$, and $`C`$ is an appropriate normalization constant used to get the final value of $`A_i(n)`$ in the range $`[0,1]`$. We would expect the coding to be different for each cluster. In Fig. 4 we show the results for three different clusters chosen at random in a network within the SW connectivity regime. It can be observed that the activities of the different clusters are out of phase and reach their maximum values at different periods of the global average activity. A system with such a coding can rapidly and efficiently perform computations that are essential to pattern recognition, and that are much more difficult to perform in a rate-coding framework . Following this coding scheme, information about an odour is contained not only in the neural assembly active at each oscillation cycle, but also in the precise temporal sequence in which these assemblies are updated during the response to the odour. Temporal coding thus allows combinatorial representations in time as well as in space . Lastly, in order to check the robustness of the network response, we have computed correlations between the activities of a given cluster in five different realizations of the simulation. We have introduced a Gaussian uncorrelated noise in the external input: $`I^e=I_0+\sigma \epsilon `$, where $`\sigma `$ is the outcome of a normal distribution and $`\epsilon `$ is the noise level. The correlations are calculated as follows: $$C_i^{(rs)}(\epsilon )=\frac{\sum _nm_i^{(r)}(n)\,m_i^{(s)}(n)}{\sqrt{\sum _nm_i^{(r)}(n)^2\,\sum _nm_i^{(s)}(n)^2}}$$ (9) where $`r`$ and $`s`$ correspond to different realizations of the simulation, $`i`$ is the cluster, the sums are over all periods $`n`$, and the dependence on the noise is implicit in the right side of the equation. We have defined the quantity $`m_i^{(r)}(n)=A_i^{(r)}(n)-\langle A_i^{(r)}\rangle _n`$, with $`A_i^{(r)}(n)`$ the magnitude from Eq. (8), for a particular realization $`r`$. The angle brackets denote average over all periods $`n`$. An average of the correlation curves for different pairs of realizations is plotted in Fig. 5. There are two clearly different regions in the graph: for small amounts of noise the correlation is almost $`1.0`$, whereas for a higher noise level the correlation jumps to a lower value. The discontinuity appears at a noise level of approximately $`0.1`$ percent. In conclusion, a variety of possible network topologies has been investigated. Each one gives rise to different dynamical properties. Regular networks produce coherent oscillations and temporal coding on a slow time scale, whereas random networks give rise to fast response but without coherent oscillations. We have introduced new results on small-world networks, showing that both coherent oscillations and temporal coding can coexist in synergy on a fast time scale. 
At the onset of this research project, intrigued by the olfactory AL dynamics, we were searching for a dynamical system with precisely the properties just described for the SW networks. Hence the work reported here is not only interesting from the dynamical systems point of view, but is also relevant for the understanding of biological systems. We want to acknowledge Gilles Laurent, Alex Bäcker, Maxim Bazhenov, Misha Rabinovich and Henry Abarbanel for very insightful discussions. We thank the Dirección General de Enseñanza Superior e Investigación Científica for financial support (PB97-1448). We thank the Universidad Autónoma de Madrid for a graduate fellowship to L. F. L. and the Centro de Computación Científica (UAM) for the computation time.
no-problem/9909/cond-mat9909454.html
ar5iv
text
# The Effect of Splayed Pins on Vortex Creep and Critical Currents ## Abstract We study the effects of splayed columnar pins on the vortex motion using realistic London Langevin simulations. At low currents vortex creep is strongly suppressed, whereas the critical current $`j_c`$ is enhanced only moderately. Splaying the pins generates an increasing energy barrier against vortex hopping, and leads to the forced entanglement of vortices, both of which suppress creep efficiently. On the other hand splaying enhances kink nucleation and introduces intersecting pins, which cut off the energy barriers. Thus the $`j_c`$ enhancement is strongly parameter sensitive. We also characterize the angle dependence of $`j_c`$, and the effect of different splaying geometries. The enhancement of critical currents in superconductors through irradiation by heavy ions is well established . The ions create extended columnar defects that localize individual vortices much more effectively than naturally occurring point defects. It was suggested by T. Hwa et al. that pinning could be further improved by splaying these columnar defects. In recent experiments , splayed pinning has been created directly through multiple-step irradiation of the sample, or indirectly by irradiation through thin foils. Up to an order of magnitude enhancement of the critical current relative to parallel columnar pins has been reported . Columnar pins which are aligned with the field direction allow a flux line to sit in a potential minimum in every layer without having to pay any elastic energy. There are no competing tendencies which favor point over columnar pinning, and hence there is a clear expectation that columnar pinning will greatly enhance the critical current, as is universally borne out experimentally. That splay should further enhance $`j_c`$ seems much less compelling. It has been argued that a vortex hopping between two splayed pins experiences a linearly increasing energy cost, whereas once a hop originates between columnar pins there is no further cost for completion of the jump. This leads to a suppression of motion for splayed pins. An additional mechanism for the enhancement of $`j_c`$ that we suggest is an increase in collective pinning effects due to greater vortex entanglement in the presence of splayed pins. This second mechanism may be more effective than the first since, although the distance between two splayed defects increases as one moves away from their point of closest approach, the separation between the pins may be very small at this closest point. Thus one could argue that splaying leads to an increased nucleation of hopping. Furthermore, the increasing energy cost which exists for two splayed pins can be cut off by the additional pins that are present. Set against the suggestions that splay enhances pinning is also the fact that splaying undermines the key feature which made columnar pinning so effective in the first place. Vortices must elongate in order to take advantage of the pins, though this wandering is not random, as for point pinning. Even accepting the increasing-energy-cost and entanglement arguments, it is not obvious that they will dominate over the very special topological effectiveness of columnar pinning. 
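To make the two-pin argument concrete, consider a toy parametrization of the kink energy $`U(l)`$ as a function of the lateral extent $`l`$ of the double kink (the parametrization and constants below are ours, purely for illustration; they are not the simulation model used later):

```python
import numpy as np

# Toy two-pin energy barrier (arbitrary units).
E_nuc = 1.0    # cost of nucleating the double kink itself
eps   = 0.2    # pinning energy per unit length lost by the off-pin segment
theta = 0.1    # splay half-angle (radians)

l = np.linspace(0.0, 20.0, 201)            # lateral extent of the kink
U_columnar = np.full_like(l, E_nuc)        # flat: no cost to spread the kink
U_splayed  = E_nuc + eps * 2 * theta * l   # off-pin segment grows ~ 2*theta*l
```

In this caricature the splayed barrier grows without bound; as we show below, in a real pin array the growth is cut off once the spreading kink reaches a third, intersecting pin.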
The uncertain consequences of this set of competing effects are reflected experimentally in the fact that the enhancement of pinning by splay is sensitive to the details of the splay, including whether the angular distribution is Gaussian or bimodal, the splay angle, and the material used. In spite of the importance of splay for critical current enhancement, no realistic numerical simulations of vortices interacting with splayed pinning have been reported up to this time. In this paper we present the first analysis of the effects of splayed pinning using overdamped molecular dynamics simulations. By examining the distribution of lengths of pinned vortex segments between kinks, we explicitly verify the conjecture that the energy cost for vortex motion increases for intermediate length segments in the presence of splay, as a two-pin argument suggests. We refine this picture in a crucial way by demonstrating how this growth is cut off by intersections with additional pins. This measure at the same time shows an enhancement of short length segments (nucleation events). By comparing splay which is in the plane of the Lorentz force with splay that is orthogonal to it, we can also address the effect of entanglement on pinning. We find that the combination of entanglement and confinement can lead to a suppression of creep, but only in a somewhat limited parameter range. Finally, we emphasize the need to distinguish suppression of vortex creep from suppression of the critical current when determining the efficacy of different pinning configurations. We conduct overdamped molecular dynamics simulations using a London-Langevin model. We refer the reader to for details. The key feature of the approach is the incorporation of forms for the vortex-vortex interaction, elastic bending, and thermal Langevin forces, which use experimental values for the coherence and penetration lengths, and anisotropy parameter, and hence allows us to simulate the system without a host of adjustable parameters. In we verified that our choices accurately reproduce the experimental phase diagram. The current simulations are performed at $`T=77`$ K, inside the glassy phase as evidenced by the creep-like E-J curves below. The samples are of size $`106\lambda \times 106\lambda `$, containing 49 vortices extending through 80 layers. The pinning potential representing irradiated defects is modeled by short-range attractive parabolic wells of radius $`r_p=0.5\lambda `$ which are spatially correlated on neighboring layers to form columnar pins. The wells in different layers belonging to a single column have the same pinning energy $`U_p^i`$, with $`U_p^i`$ selected from a Gaussian distribution with mean $`U_p=0.08`$ and standard deviation $`\sigma _p=0.012`$. With this choice the vortex depinning transition falls near $`j/j_0=0.08`$ (where $`j_0`$ is the BCS depairing current), and the magnitude of the pinning force is on average of the same order as the elastic and Lorentz forces. 
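Schematically, one overdamped update of the London-Langevin dynamics takes the form sketched below (our own notation, not the actual simulation code; `forces` stands for the sum of the vortex-vortex, elastic, pinning and Lorentz terms described above):

```python
import numpy as np

def langevin_step(x, forces, eta, T, dt, rng):
    """One overdamped Langevin update for the in-plane positions x of the
    vortex line elements (array of shape (n_elements, 2)).
    eta: friction coefficient; T: temperature in energy units (k_B = 1);
    rng: a numpy.random.Generator.  The noise amplitude follows from the
    fluctuation-dissipation theorem: std = sqrt(2 * eta * T / dt)."""
    thermal = np.sqrt(2.0 * eta * T / dt) * rng.standard_normal(x.shape)
    return x + (dt / eta) * (forces(x) + thermal)
```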
We consider several different pinning geometries: parallel columnar pins, in which the columns are aligned with the $`z`$ axis; transverse bimodal splay, in which each pin is tilted at angles $`\pm \theta `$ from the $`z`$ axis in the plane transverse to the direction of vortex motion; longitudinal bimodal splay, in which the tilt plane is spanned instead by the direction of vortex motion and the $`z`$ axis; Gaussian splay, in which $`\theta `$ is selected from a Gaussian distribution centered about zero and the tilt direction is uniformly distributed in the $`x`$–$`y`$ plane; and point pins, which are not spatially correlated between layers. We first study single vortex phenomena by performing simulations in which the number of vortices is less than the number of pins, $`N_v<N_p`$. In Fig. 1, we compare two samples with an equal density of columnar pins. Here the dimensionless resistivity, $`\rho =(E/j)/\rho _{BS}`$, is defined in units of $`\rho _{BS}`$, the Bardeen-Stephen resistivity. $`E`$ is the electric field and $`j`$ is the current density. In the first sample the pins are aligned parallel to the $`z`$ axis, and in the second the pins are splayed at $`\theta =\pm 5.7^{\circ }`$ from the $`z`$ axis, transverse to the direction of vortex motion. We find a strong suppression of the creep of vortices by splay in the low current regime. While the dynamic range is limited, when a creep-type exponential fit, $`E\propto \mathrm{exp}(-1/j^\mu )`$, is performed, it is consistent with an increased value of $`\mu `$. The effect of splaying decreases with increasing current. The critical current $`j_c`$ is typically defined via a threshold criterion $`\rho (j_c)=\rho ^\mathrm{t}`$. Choosing low thresholds we clearly observe an enhancement of the critical current. For example, using $`\rho ^\mathrm{t}=10^{-4}`$, $`j_c`$ is enhanced by 20$`\%`$. At higher thresholds this enhancement is reduced, however, finally disappearing for $`\rho ^\mathrm{t}>0.05`$. Now we analyze the physical mechanisms at work in the presence of splay. Single vortex phenomena dominate in the dilute vortex limit, considered here. At low applied currents, $`j/j_0<0.1`$, vortices move between pins by thermally activated double kinks . For parallel columnar pins, extending an already formed double kink does not cost extra energy [Fig. 1(a), simulation image]. For splayed pins with a bimodal distribution, double kinks between pairs of pins tilted in opposite directions [Fig. 1(b), simulation image] experience an increasing, or confining, energy barrier, since after the nucleation of the kink the unpinned vortex segment must keep growing longer in the high energy region between the pins. On the other hand, pairs of pins tilted in the same direction with respect to the $`z`$ axis are on average twice as far apart as in the columnar case, so the nucleation of double kinks bridging parallel pins must span twice as long a distance, and hence costs more energy. We extract the energy barrier against the spreading of kinks by measuring the distribution function, $`P(l_k)`$, of the lengths $`l_k`$ of the vortex segments between kinks. If the kink energy does not depend on its length, as is expected for columnar pins, $`P(l_k)`$ should be roughly uniform for pins of equal depth, and exhibit slow decay with length for pins with differing depths. If the kink energy grows linearly with length, as expected for splayed pins, $`P(l_k)`$ should fall off exponentially at large $`l_k`$. In Fig. 2 we show $`P(l_k)`$ for $`j/j_0=0.065`$. 
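The distribution is accumulated from the instantaneous vortex configurations; the bookkeeping can be sketched as follows (the array convention and segmentation criterion are our assumptions, for illustration only):

```python
import numpy as np

def kink_segment_lengths(pin_id):
    """pin_id[z]: index of the pin occupied by the vortex element in
    layer z, or -1 if the element is unpinned.  Returns the lengths, in
    layers, of the contiguous stretches trapped on a single pin, i.e.
    the vortex segments between kinks."""
    lengths, run, cur = [], 0, -1
    for pid in pin_id:
        if pid >= 0 and pid == cur:
            run += 1                  # segment continues on the same pin
        else:
            if run:
                lengths.append(run)   # a kink terminates the segment
            run, cur = (1, pid) if pid >= 0 else (0, -1)
    if run:
        lengths.append(run)
    return lengths

# P(l_k): normalized histogram accumulated over vortices and snapshots, e.g.
# P, edges = np.histogram(all_lengths, bins=range(1, 40), density=True)
```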
The columnar distribution is rather uniform, whereas the splayed one exhibits a significant enhancement at small $`l_k`$ relative to the columnar case, followed by a rapid fall in the intermediate region, and a slow decay in the large $`l_k`$ regime. The enhancement at small $`l_k`$’s has a simple explanation: because of the splaying, the minimal distance between the pins is much smaller than for the columnar case. Therefore the nucleation of the double kinks, which takes place here, and the formation of short segments are much enhanced. The subsequent fall of $`P(l_k)`$ at intermediate $`l_k`$ is consistent with an exponential, supporting the picture of a linearly increasing potential barrier. At large values $`l_k\gtrsim 13\xi `$, however, the decay of $`P(l_k)`$ slows down, and $`P(l_k)`$ tracks the columnar distribution. This can be attributed to the interference of additional pins. The high energy segment between two tilted pins increases only until one of the kinks reaches a third pin intersecting the second pin onto which the vortex is hopping. In the present bimodal distribution this third pin is parallel to the first one. Thus, as the kinks slide further, the length of the high energy segment remains unchanged, in complete analogy to the columnar case. To consider many-vortex phenomena, we move to the high density limit, $`N_v>N_p`$. For $`N_v=2N_p`$, half of the vortices are directly pinned, and the rest are pinned only by the repulsion of their pinned neighbors. At low driving currents, only the latter, interstitial vortices move. The flowing interstitial vortices are roughly aligned with the applied magnetic field, whereas the pinned vortices are tilted. Thus the interaction between these two types of vortices results in their entanglement. Fig. 3 shows the result of our simulations. First, the critical current is much reduced compared to the case $`N_v<N_p`$, due to the fact that the interstitial vortices depin much more easily than their pinned neighbors. Second, when $`j_c`$ is defined using the same threshold resistivity as before, $`\rho ^\mathrm{t}=10^{-4}`$, we observe a factor of 2 enhancement in $`j_c`$ by splay, compared to the 20$`\%`$ seen in Fig. 1. This strongly suggests that the forced entanglement of vortices is capable of impressive additional enhancements of the critical current. Here we are in a position to check whether the physics of splayed pins is “adiabatically connected” to the columnar case. It is expected that either a single vortex is still sticking to a single pin, after it has been splayed, or that several vortices will pin to a column in such a way that columns are either (nearly) fully occupied, or (nearly) completely empty, forming vortex-like “quasi lines” . We tested these expectations by determining the distribution function of the percentage-wise occupation of the splayed pins. Remarkably, this distribution function shows sharp peaks around zero and 100$`\%`$ occupancy, with very suppressed values in between, verifying the expectations. At small angles this is mostly due to vortices still sticking to a single pin. We close this section by mentioning that the enhancement of $`j_c`$ occurred only for a rather limited range of the model’s parameters; for large regions of the parameter space we found either minimal effects, or the reduction of $`j_c`$ upon splaying as the magnetic field was further increased. This shows the powerful influence of the enhanced kink nucleation and of the cutoff of the confining potential by third pins. 
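For reference, the occupancy measure used above can be computed as in the following sketch (array conventions as in the previous sketch; ours, for illustration):

```python
import numpy as np

def pin_occupancy(pin_id, n_pins):
    """Percentage-wise occupation of each pin.  pin_id has shape
    (n_vortices, n_layers); entries are pin indices, or -1 if unpinned.
    Returns, for every pin, the fraction of its layers occupied by some
    vortex element; a histogram of the result gives the distribution
    function discussed above."""
    n_layers = pin_id.shape[1]
    occ = np.array([np.count_nonzero(pin_id == p) / n_layers
                    for p in range(n_pins)])
    return occ
```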
Experimentally, up to tenfold $`j_c`$ enhancements have been reported , exceeding our results but of a comparable order. This enhancement is strongly dependent on the magnetic field and temperature, however, often diminishing to small values, or even turning into a reduction instead. This is consistent with our finding of the importance of nucleation and intersecting-pin effects. Next we return to samples with $`N_v<N_p`$, and explore the dependence of $`j_c`$ on the angle of splaying $`\theta `$. The inset of Fig. 2 displays $`j_c(\theta )`$. For small angles $`\theta <10^{\circ }`$, $`j_c`$ increases due to the increased energy barrier to kink spreading with increasing angle. Around $`\theta \sim 10^{\circ }`$, $`j_c`$ exhibits a maximum. It decreases smoothly for larger angles, as the vortices cease to accommodate to the pins in order to stay aligned with the magnetic field . This nonmonotonic behavior of $`j_c`$ is consistent with experiments . Experimentalists have investigated several different pinning configurations . To make contact with these studies, we have measured the resistivity of samples with equal numbers of pinning elements placed in the following arrangements: uncorrelated point-like pinning, Gaussian splayed pinning, columnar pinning, as well as longitudinal and transverse bimodal splayed pinning. As shown in Fig. 4, we find the highest resistivity for point pins, lower for columnar, and the lowest for bimodal transverse splay. We find that Gaussian splay produces an enhancement of the creep relative to columnar pins, which is consistent with experiments . Among the bimodal splay configurations, transverse splay suppresses creep more than longitudinal splay, again in agreement with experiments . Transverse splay is more effective than longitudinal, because it forces the entanglement of vortices more effectively. Also, longitudinal splay brings the pins closer together in the direction of vortex motion, thus helping the nucleation of the double kinks. Finally we return to the current dependence. As the applied current increases to values approaching the depinning regime, the vortices are pinned less effectively and spend more time between pins. The vortex motion is no longer dominated by kinks, weakening the single vortex arguments. Also, forced entanglement is no longer effective at preventing vortex motion near the depinning regime, since when a vortex begins to move, it is increasingly likely that its forward neighbor is already moving, or that the push from behind is sufficient to start its motion. Thus instead of a forced entanglement, both vortices move together. For all these reasons, the creep suppression by splay decreases with increasing currents, and completely disappears in the depinning regime. In conclusion, we have used realistic London Langevin simulations to study the motion of vortices in the presence of splayed columnar defects. We found that splaying introduces a confining potential against vortex hopping. It also enhances kink nucleation and introduces third pins, however, cutting off this linear potential. We also established the importance of forced entanglement. Finally we analyzed the angle dependence of the critical current, and compared different splaying geometries. Several of our results compare favorably to experiments. We thank G. Crabtree, N. Grønbech-Jensen, T. Hwa, L. Krusin-Elbaum, W. Kwok, P. Le Doussal, D. Nelson, and V. Vinokur for useful discussions. Funding was provided by NSF-DMR-028535, and the CLC and CULAR grants, administered by the University of California.
no-problem/9909/astro-ph9909041.html
ar5iv
text
# Spectroscopic analysis of a super-hot giant flare observed on Algol by BeppoSAX on 30 August 1997 ## 1 Introduction Algol-type systems are short-period eclipsing binaries composed of an early-type primary and a (near) main-sequence late-type secondary. The short orbital period ensures that both stars are tidally-locked fast rotators. The secondaries in these systems show strong magnetic activity with copious X-ray emission. While tidally-induced activity is a common characteristic in RS CVn-type systems, the early-type primary in Algol-type systems, lacking a surface convective envelope, is not expected to be able to sustain a dynamo. Therefore the coronal structures should be concentrated on the secondary only, making for a simpler geometry and avoiding the complication of cross-system loop structures which may exist in the case of RS CVn-type binaries. Because of its proximity (28.46 pc on the basis of the Hipparcos parallax, ESA (1997)) the eponymous system Algol is one of the apparently strongest coronal X-ray sources in the sky. Its brightness ($`V\sim 2.7`$) soon made the periodic fading obvious (it was first reported, in the western world, by Geminiano Montanari in 1667), and its interpretation in terms of mutual eclipses in a binary system was already proposed by John Goodricke in 1782. The system consists of a B8 V primary and a K2 IV secondary (plus a more distant tertiary component, with a period of $`\sim 1.8`$ yr and a spectral type A or F). Hill et al. (1971) report values for the masses and radii of the two components $`R_\mathrm{A}=3.0R_{\odot }`$, $`M_\mathrm{A}=3.7M_{\odot }`$ and $`R_\mathrm{B}=R_\mathrm{K}=3.4R_{\odot }\simeq 2.4\times 10^{11}`$ cm, $`M_\mathrm{B}=0.8M_{\odot }`$, while Richards (1993) reports $`R_\mathrm{A}=2.90R_{\odot }`$, $`M_\mathrm{A}=3.7M_{\odot }`$ and $`R_\mathrm{B}=R_\mathrm{K}=3.5R_{\odot }\simeq 2.5\times 10^{11}`$ cm, $`M_\mathrm{B}=0.81M_{\odot }`$; the orbital inclination is reported to be $`i=81.4`$ deg. We will adopt the Richards (1993) parameters in the following. The orbital period is $`2.8673`$ d. The ephemeris we have adopted in the present work is HJD 2 445 739.0030 + 2.8673285 $`E`$ (Kim (1989); Al-Naimiy et al. (1985)). The separation is 14.14 $`R_{\odot }`$, or $`\sim 4`$ times the radius of the K star. Algol was identified as an X-ray source already in the ’70s with observations from the SAS 3 satellite (Schnopper et al. (1976)) and its soft X-ray emission was confirmed with sounding rocket flights (Harnden et al. (1976)). Its intense activity level has made it a target of choice for most UV, EUV and X-ray observatories. The high level of X-ray emission was initially interpreted in the framework of the mass-transfer paradigm, given the evidence from optical data of mass transfer taking place between the two components (see Richards (1993) and references therein). However, spectroscopic observations soon showed a hot, thermal X-ray spectrum, requiring the presence of magnetically confined plasma, i.e. of a corona, expected to be located on or around the K-type secondary. The soft X-ray emission of Algol is characterized by the frequent occurrence of major flares. Almost all sufficiently long observations to date have yielded a significant flaring event, with EXOSAT (White et al. (1986); van den Oord & Mewe (1989)), GINGA (Stern et al. (1992)) and ROSAT (Ottmann & Schmitt (1996)) all observing long-lasting flares (with effective decay times ranging between 5 and 20 ks), which have been extensively discussed in the literature. 
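For reference, orbital phases quoted throughout the paper follow from the adopted ephemeris; a minimal helper is given below (we assume, as is conventional, that the epoch marks primary minimum):

```python
def orbital_phase(hjd):
    """Orbital phase of Algol from the adopted ephemeris
    HJD 2445739.0030 + 2.8673285 E, with phase 0 at the epoch."""
    T0, P = 2445739.0030, 2.8673285   # epoch (HJD) and period (days)
    return ((hjd - T0) / P) % 1.0
```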
The above flares have all been analyzed in a similar way, allowing for a reasonably homogeneous comparison of their characteristics to be made. In particular, the observed decay has in all cases been used to derive (following the formulation of van den Oord & Mewe (1989)) the length, and consequently, the average density of the flaring plasma, under the assumption that the flaring loop evolves through a series of “quasi-static” loop states. In all cases the analysis has made use of the observed constancy of the normalized ratio between the radiative and conductive cooling time $`\mu `$ (see Sect. 5.2) to ascertain that the flaring loop(s) are cooling through a sequence of quasi-static states, and of the small value of the heating function present as a parameter in the quasi-static formalism to establish the lack of additional heating during the flare decay phase. The loop lengths thus derived (all along the present paper, the term “loop length” will be used to indicate the length from the footpoint to the apex of the loop, i.e. actually its “semi-length”) range between $`\sim 1`$ and $`6\times 10^{11}`$ cm (i.e. between 0.4 and 2.4 $`R_\mathrm{K}`$ – see Table 3). These large loops (comparable in size to, or larger than, the stellar radius, unlike the solar case, in which flaring structures are small compared to the solar radius) are however shorter than the coronal pressure scale height, because of their high temperatures and the low surface gravity of the K-type subgiant Algol secondary. The corresponding plasma densities, derived within the same framework, range between $`5`$ and $`26\times 10^{10}\mathrm{cm}^{-3}`$. The general picture for the flaring regions observed on Algol is therefore one of large and rather tenuous loops, a natural consequence of the very energetic and long-lasting flares if no heating is indeed present during the decay phase. In the solar case, in addition to the “compact” flares, in which the plasma appears to be confined to a single loop whose geometry does not significantly evolve during the event, a second class of flares is usually recognized, i.e. the “two-ribbon” flares, in which a disruptive event appears to open up an entire arcade of loops, which subsequently closes back, leading to the formation of a growing system of loops whose footpoints are anchored in H$`\alpha `$-bright “ribbons”. Two-ribbon flares are generally characterized by a slower rise and decay, and a larger energy release. Compact flares have often been considered to be due to “impulsive” heating events, while the longer decay times of two-ribbon events have been considered as a sign of sustained heating. However, also in the case of compact flares sustained heating has been shown to be frequently present (Reale et al. (1997)), so that the distinction may indeed be less clear than often thought. Long-lasting stellar flares have at times been considered as analogs to solar two-ribbon flares (due to their longer time scales, e.g. Pallavicini et al. (1990)). However, the only theoretical framework so far available to model this type of event (Kopp & Poletto (1984)) relies on a large number of free parameters and assumptions (such as the conversion efficiency of the magnetic field into X-rays and the assumption of instantaneous dissipation of the reconnection energy). 
As a consequence the physical parameters of the flaring regions derived with this approach are, for spatially unresolved events such as the stellar ones, rather strongly dependent on some specific assumptions, so that in practice most stellar flares have been modeled as compact events. We will follow the same approach here, however keeping in mind the possibility that the event may not necessarily be described as a compact one. We have performed a long ($`\sim 240`$ ks elapsed time, covering a full orbit of the system) observation of Algol with the BeppoSAX X-ray observatory, aiming at studying both the spectral and the temporal characteristics of its X-ray emission. During the observation a very large and long-lasting flare was observed. We present in this paper a detailed analysis of the characteristics of this flare, deriving the temporal evolution of the spectral parameters of the plasma (temperature $`T`$, emission measure $`EM`$, coronal abundance $`Z`$, absorbing column density $`N(\mathrm{H})`$), and subsequently applying different methods to the analysis of the flare decay in order to derive the physical characteristics of the flaring region. For this purpose we have applied both the quasi-static decay method of van den Oord & Mewe (1989) and the method of Reale et al. (1997), which allows for the possibility of (exponentially decaying) sustained heating during the flare decay, simultaneously deriving both the time scale of the heating and the size of the flaring loop. In line with the previous analyses of large flares on Algol, the analysis of the flare’s decay using the quasi-static approach results in a long and tenuous flaring loop, although in this case the derived loop size and density are more extreme, given the exceptionally long duration and peak temperature of the event. One unique characteristic of the flare studied here is that its emission underwent a total eclipse coincident with the secondary optical eclipse. This allows (as discussed in detail in a companion paper, Schmitt & Favata (1999)) to put a firm upper limit on the size of the flaring region, and thus to compare, for the first time in a context other than the solar one, the length derived through the analysis of the flare decay with the geometrical size of the emitting region. This comparison shows that the loop sizes derived from the analysis of the flare decay through the quasi-static method are significantly larger than the geometrical size of the flaring region. Therefore the actual flaring region must have had a much larger plasma density, and sustained heating must have been present throughout the decay phase to explain the long observed decay time. The method of Reale et al. (1997) produces a large range of allowed loop lengths, which at the lower end overlap with the size derived for the flaring region from the eclipse analysis. Also, this type of analysis points to the presence of significant sustained heating during the decay phase. The metal abundance of the flaring plasma is seen to vary significantly during the course of the flare’s evolution. Abundance variations during the evolution of the flare were already hinted at in the analysis of the GINGA (Stern et al. (1992)) and ROSAT (Ottmann & Schmitt (1996)) flares, and evidence for this type of effect has been reported for flares on other stars. 
The combination of high statistics, good spectral resolution and wide spectral coverage of the present Algol observation makes it however possible for the first time to quantitatively derive the evolution of the plasma abundance. Finally, large variations of the absorbing column density are also observed during the early evolution of the flare, hinting at the possibility of a coronal mass ejection taking place in association with the onset of the flare. Although flares of different types from several classes of coronal sources have been discussed in detail in the literature in the past, the large flaring event on Algol discussed in the present paper is exceptional for several reasons: * Its long duration (almost two days) associated with its high luminosity allows for high signal-to-noise spectra to be collected with a time resolution small compared with the time scale of flare evolution, and thus to analyze in detail the temporal evolution of the plasma parameters with small statistical errors, and on different time scales. * The complete, uninterrupted time coverage, from several thousand seconds before the onset of the flare until the end of its decay, allows for its complete temporal evolution to be studied. * The occurrence of a *total* eclipse of the flaring plasma by the primary star allows, for the first time, for a geometrical determination of the size of the flaring structure (see Schmitt & Favata (1999)), which can then be compared with the loop lengths derived through an analysis of the decay of the spectroscopic parameters. This allows for a critical test of the assumptions underlying these techniques, which are the only ones available when confronted with stellar flares with no spatial resolution. * The high X-ray flux and spectral temperature associated with this event, together with the unprecedented energy coverage offered by the instrumental complement of the BeppoSAX observatory, allow for the spectrum of the flaring plasma to be studied between 0.1 and 100 keV, thus removing the uncertainties on the temperature of the flaring plasma during the hottest phases of large flares (where even the ASCA instruments can only provide lower limits to the temperature) which have characterized previous analyses of large flares. At the same time, the spectral shape can be critically determined, in particular looking for the presence of non-thermal spectral components. ## 2 The BeppoSAX observation The BeppoSAX observatory (Boella et al. (1997a)) features different instruments, four of which were used in the analysis of our Algol observation, i.e. the LECS (Parmar et al. (1997), which covers the energy range 0.1–10 keV), the two MECS detectors (Boella et al. (1997b), MECS-2 and MECS-3, covering the range 1.6–10 keV) and the PDS detector (Frontera et al. (1997), covering between 15 and $`\sim 300`$ keV – only data in the 15–100 keV band were used in the present paper). The BeppoSAX observation of Algol covered a complete binary orbit (i.e. $`\sim 240`$ ks elapsed time). It started on Aug. 30, 1997 at 03:04 UT (shortly before the primary optical eclipse) and lasted until Sep. 1, 1997 at 20:32 UT. Approximately 20 ks after the beginning of the observation, a very strong flare began, whose evolution dominates the rest of the observation. A detailed analysis of the total eclipse of the flare as seen in the MECS detectors is presented by Schmitt & Favata (1999), who derive the corresponding geometrical constraints on the size and shape of the flaring region. 
The present paper will concentrate on the spectral analysis of the X-ray emission and on the analysis of the characteristics of the flaring region from the flare decay, using the complete spectral range covered by the BeppoSAX detectors. ### 2.1 Data reduction Telemetry files, containing both information on individual detected X-ray photons and house-keeping data, were obtained from the observation tapes for each instrument, and data for each instrument were individually processed with the saxdas software (available from the BeppoSAX Scientific Data Center – hereafter SDC, reachable at http://www.sdc.asi.it), with the default settings, producing FITS-format linearized photon event files for each instrument. For the three imaging instruments (LECS, MECS-2, MECS-3) standard extraction regions were used, i.e. 8.2 and 4.0 arcmin diameter circles centered on the source, for the LECS and MECS data, respectively. The background was extracted from regions of the same size and location as the source extraction region from the standard background files supplied by the SDC. Spectra and light-curves both for the source and the background were extracted using the xselect software. PDS spectra and light-curves were extracted using the saxdas-supplied packages, which directly produce background-subtracted spectra and light-curves. ### 2.2 Light curves The background-subtracted light-curve in the 15–100 keV band, extracted from the PDS data, is shown in Fig. 1, binned in 6000 s intervals, while the background-subtracted light-curve in the 1.6–10.0 keV band, extracted from the MECS-3 detector, is shown, binned in 600 s intervals, in Fig. 2. The light-curve for a softer band (0.1–0.5 keV), derived from the LECS data, is shown in Fig. 3, binned in 900 s intervals. The LECS is operated during Earth night only, resulting in a lower observing efficiency and in the larger data gaps seen in Fig. 3 with respect to Fig. 2. Inspection of Fig. 2 shows that the flare (as seen in the 1.6–10 keV band) has a rather slow rise (with $`\sim 30`$ ks between the flare start and the peak). The decay is for the first $`\sim 15`$ ks relatively rapid, on time scales comparable with the rise, but then slows down, becoming very nearly exponential. The eclipse due to the secondary is well visible, between $`\sim 130`$ and $`\sim 160`$ ks from the beginning of the observation. The exponential decay is interrupted, at $`\sim 200`$ ks, by the onset of yet another flare. The hard (15–100 keV) X-ray light curve shows a slow rise, similar to the one seen in the 1.6–10 keV band, but a faster decay. The hard X-ray count rate returns to its pre-flare value at $`\sim 130`$ ks from the beginning of the observation, so that the eclipse of the flaring plasma is not visible in this band. Also, there is no evidence for hard pre-flare emission, which could in principle have been due to non-thermal emission associated with fast particles. The soft-band (0.1–0.5 keV) light curve differs significantly from the 1.6–10 keV band light curve, as the rise is slower, and the decay “bounces back”, after $`\sim 10`$ ks, for $`\sim 20`$ ks. These differences are discussed in more detail in Sect. 4.2. ### 2.3 Spectra The high count rate of the Algol observation in the LECS and MECS detectors allows for detailed time-resolved spectroscopy. The observation has therefore been split into 28 separate time intervals, with boundaries coinciding with observational gaps due to the Earth blocking the source, and with each segment covering one or more spacecraft orbits. 
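Schematically, the binned, background-subtracted light curves of Figs. 1–3 are obtained as in the following sketch (interface and names are ours, for illustration; they do not reproduce the actual saxdas/xselect calls):

```python
import numpy as np

def binned_rate(t_src, t_bkg, tstart, tstop, dt, area_ratio=1.0):
    """Background-subtracted light curve from photon arrival times.
    t_src, t_bkg: arrival times in the source and background regions;
    dt: bin size in seconds (e.g. 600 s for the MECS curve of Fig. 2);
    area_ratio: rescales the background region to the source region."""
    edges = np.arange(tstart, tstop + dt, dt)
    src, _ = np.histogram(t_src, bins=edges)
    bkg, _ = np.histogram(t_bkg, bins=edges)
    rate = (src - area_ratio * bkg) / dt      # net counts per second
    return edges[:-1], rate
```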
The extent of each time segment is shown, together with a number used in the following to refer to them, in Figs. 2 and 3, together with the light-curve of the observation as measured in the MECS-3 and LECS detectors respectively. The time of the optical eclipses is also indicated. Individual spectra have been extracted, in the LECS, MECS and PDS detectors, for each of the time intervals indicated in Fig. 2. Standard response matrices have been used to analyze the MECS spectra (again as available from the SDC), while the LECS response matrix was computed using the lemat program. The standard PDS pipeline already produces background-subtracted spectra and thus no further manipulation was necessary. The standard SDC-provided response matrix has been used to fit the PDS spectra. All the spectral analysis described here has been performed using the xspec version 10.00 software. Each of the individual LECS and MECS spectra accumulated during the intervals marked in Fig. 2 has been rebinned, prior to fitting, to have at least 20 counts per (variable-size) bin, and the statistical weight of each bin for the purpose of determining the $`\chi ^2`$ has been determined using the statistic of Gehrels (1986), more appropriate than the Gaussian approximation for small numbers of events. Due to a known discrepancy between the normalization of the response matrices for the LECS and MECS detectors, when LECS and MECS data are fit together it is necessary to add a relative normalization factor to the fit, with experience showing that the MECS normalization is about 10 to 20% higher than the LECS one. It is not possible however to determine a priori the exact value of the normalization, as this is a function of the source position in the field of view as well as of the source spectrum. We have therefore first performed a set of two-temperature fits on the time-resolved spectra, leaving the relative normalization of the LECS and MECS detectors as an additional free parameter. The average value of the MECS to LECS normalization thus determined is 1.15, with a range from 1.03 up to 1.24. The behavior of the fit parameters and of the quality of the fits was then determined by performing the same fits with the relative normalization fixed to the average value of 1.15. A comparison of the probability levels of the two sets of fits shows that leaving the relative normalizations free to vary does not improve the fits, and all the fits discussed in the following were thus performed with the relative normalization fixed at 1.15. All of the spectra discussed here have been fitted with a combination of absorbed thermal emission models. The plasma emission code used has in all cases been the mekal model as present in xspec, which implements the optically-thin, collisional ionization equilibrium emissivity model described in Mewe et al. (1995). The presence of absorbing material has been accounted for using a wabs component in xspec, which implements the Morrison & McCammon (1983) model of X-ray absorption from interstellar material. The metal abundance of the emitting plasma is considered as a free parameter. Abundance values are in the following given relative to the “solar” abundance, as determined by Anders & Grevesse (1989). While the global metal abundance was left free to vary, abundance ratios were kept fixed in the fitting process. ## 3 The quiescent spectrum Most of the BeppoSAX observation of Algol is occupied by the large flare, and the quiescent emission is visible only in a small time interval, i.e. 
before the flare itself (during the intervals marked 0 and 1) and during the total phase of the eclipse of the flaring plasma (interval 18). These three spectra have been individually fit with a two-temperature absorbed spectral model with freely varying global metal abundance. Their best-fit parameters are the same within the $`1\sigma `$ confidence range, and they have therefore been summed together in a single spectrum representative of the average “quiescent” emission, from which average spectral parameters have been determined, as listed in Table 1. Although significantly colder than the flaring emission, the plasma responsible for the “quiescent” emission of Algol still has a high temperature; the 3.2 keV (44 MK) observed here is somewhat higher than the $`\sim 2.5`$ keV (29 MK) reported on the basis of ASCA observations by Antunes et al. (1994). The spectral parameters of the quiescent emission from the ROSAT PSPC observation of Algol (which has a good out-of-flare phase coverage) show the presence of orbital modulation (Ottmann & Schmitt (1996)), and thus a proper subtraction of the quiescent emission from the flare spectra would require this effect to be taken into account. Given the very scant phase coverage of the quiescent spectra for the BeppoSAX observation, however, we cannot determine the phase dependence of the spectral parameters of the quiescent emission (if present). The quiescent spectrum has therefore been assumed to be constant, with parameters given by the average of the best-fit parameters of the three quiescent spectra. This average quiescent spectrum has been added as a constant (“frozen”) model component in our subsequent analysis of the flare spectra. ### 3.1 The H column density Welsh et al. (1990) report an upper limit, to the H column density toward Algol, of $`2.5\times 10^{18}\mathrm{cm}^{-2}`$ (based on an upper limit to the equivalent width of interstellar Na i lines); Stern et al. (1995) assume (based on these data) a value of $`2\times 10^{18}\mathrm{cm}^{-2}`$ in their analysis of the Algol EUVE spectrum. However, Ottmann & Schmitt (1996) find, from the analysis of the ROSAT PSPC spectra, a value of H column density higher by approximately one dex (1 to $`2\times 10^{19}\mathrm{cm}^{-2}`$). Higher values are found (again from PSPC data) by Singh et al. (1995), who report $`(3.9\pm 1.0)\times 10^{19}\mathrm{cm}^{-2}`$, and by Ottmann (1994), who reports a range of values (varying with the orbital phase) between 3 and $`8\times 10^{19}\mathrm{cm}^{-2}`$. All X-ray derived values of the absorbing column density are thus significantly higher than the values derived from the EUVE data, and the BeppoSAX quiescent spectrum is no exception, with a 68% confidence range of 5.4–$`13\times 10^{19}\mathrm{cm}^{-2}`$, comparable with the PSPC-derived range of values of Ottmann (1994). Similar discrepancies between the X-ray- and EUV-derived H column densities are also observed for other active stars. For example, the H column density toward the RS CVn system AR Lac has been estimated (using the ratio of the 335 and 361 Å Fe xvi lines in the EUVE spectrum) by Griffiths & Jordan (1998) at $`2\times 10^{18}\mathrm{cm}^{-2}`$, while Ottmann & Schmitt (1994) derive, from the PSPC spectrum, a value of $`3\times 10^{19}\mathrm{cm}^{-2}`$. The value derived from the BeppoSAX spectrum is $`6\times 10^{19}\mathrm{cm}^{-2}`$ (Rodonò et al. (1999)). We have tried to fit the quiescent X-ray spectrum with a fixed, low H column density. 
In practice, any value of $`N(\mathrm{H})`$ lower than a few parts in $`10^{19}\mathrm{cm}^{-2}`$ results in fits with a significantly higher plasma metal abundance than the $`1/3Z_{\odot }`$ obtained by leaving $`N(\mathrm{H})`$ as a free parameter: to balance the higher soft continuum, the fit process increases the metal abundance, so that the emission from the Fe L complex also increases (see Favata et al. (1997a)). As a result, however, the emission from the Fe K complex seen in the MECS detector becomes significantly over-predicted. To assess the significance of the above discrepancies we have simulated the ability of LECS spectra to determine low values of the H column density and the impact of possible calibration errors on the best-fit column density. Even assuming a perfect calibration, with the $`S/N`$ of the present spectra, column densities of $`10^{19}\mathrm{cm}^{-2}`$ cannot be distinguished from zero column densities (although above $`10^{19}\mathrm{cm}^{-2}`$ their effect becomes visible, see above). The systematics in the LECS calibration are estimated to be at the level of $`\sim 5`$%, although they could be higher (up to $`\sim 10`$%) in the low-energy (0.1–0.2 keV) range, where the very small absolute area and its steep slope as a function of energy make the calibration very difficult (A. Parmar, private communication). The best-fit column density is very sensitive to possible calibration errors at low energies: our simulations show that a $`\sim 10`$% calibration error in the spectral region below 0.2 keV could lead to a systematic bias in the best-fit column density for stellar spectra of a few $`10^{19}\mathrm{cm}^{-2}`$, comparable with the best-fit column density for the quiescent spectrum. Thus, the absolute values of LECS-derived best-fit column densities of this order should be regarded with caution, as they critically depend on the calibration being known to better than the currently estimated uncertainty. Relative changes in the absorbing column density (as seen during the flare decay here) would not be affected by such errors, nor would absolute values when the column density becomes considerable ($`>10^{20}\mathrm{cm}^{-2}`$). The influence of such low-energy calibration errors on all other spectral parameters is negligible. ## 4 Spectral evolution of the flaring emission For the purpose of determining the time evolution of the spectral parameters of the flaring plasma, the individual time-resolved joint LECS/MECS spectra have been fitted with a one-temperature absorbed thermal model with freely varying coronal abundance, plus a two-temperature model with fixed spectral parameters, set to the average values determined for the quiescent spectral emission. The one-temperature model being fit thus represents the flare emission as visible on top of the quiescent spectrum. The resulting best-fit parameters for the flaring component are reported in Table 2. The temporal evolution of the best-fit temperature, emission measure, abundance and absorbing column density for the flaring component is illustrated in Fig. 4. The one-temperature fits to the flaring plasma emission provide a good description (i.e. a $`\chi ^2`$ value close to 1.0) of most of the spectra, with the exception of the spectra accumulated during time intervals 5 and 6, which yield a reduced $`\chi ^2`$ that is formally unacceptable, corresponding to a very low probability level. 
All other fits yield a reduced $`\chi ^2`$ corresponding to a probability that the adopted model gives a satisfactory description of the data of $`\gtrsim 10`$%. We tried to improve the quality of the fit to the spectra accumulated during time intervals 5 and 6 by adding an additional thermal component to the flare spectrum, i.e. assuming that the flare spectrum is better described by a two-temperature model. While the additional degrees of freedom lead to an improved $`\chi ^2`$ for flare spectra 5 and 6 when a two-temperature model is used, the implied probability level for the fit is still very low, and the two-temperature model is still not a good description of the data. In addition, if a two-temperature model is fitted to the flaring spectra for time intervals other than 5 and 6, the resulting spectral parameters are much more poorly determined, with a high degree of degeneracy apparent if the confidence regions are examined. One obvious feature in the fit residuals for intervals 5 and 6 (which is not altered by the use of a two-temperature model) is the bump at $`\sim 1.2`$ keV. A similar excess in the spectrum is visible, for example, in the ASCA SIS spectrum of Capella discussed by Brickhouse (1998) and Brickhouse et al. (1999), which they attribute to a complex of lines from Fe xvii to Fe xix from atomic levels with high quantum numbers ($`n>5`$). These lines are missing in current models, which will then consistently under-predict the emission in this region, and are likely contributing to the higher $`\chi ^2`$ values found for these time intervals. While one possible way of decreasing the residuals would be to allow selected abundance ratios to vary during the fit, this would not however be warranted for the rest of the flare spectra, and doing it only for intervals 5 and 6 would again yield results which cannot be compared with the rest. Also, as shown by Brickhouse et al. (1999), the lack of the high-excitation Fe lines in the model yields spurious abundances for the other elements if they are left free to vary; we will therefore not explore this possibility further in the present paper. From Fig. 4 the good temporal coverage of the evolution of the flare is evident, from its onset all the way to its disappearance. Note the gap between $`\sim 130`$ and $`\sim 160`$ ks due to the eclipse. The general behavior of the flare is similar to the one commonly observed in the several large stellar and solar flares studied so far, with the temperature peaking at the beginning of the flare and the emission measure rising more slowly and peaking at a later time. The shape of the temperature decay is close to exponential throughout the flare (although it briefly increases again between $`\sim 50`$ and $`\sim 70`$ ks). The emission measure increases slowly for a long time ($`\sim 20`$ ks) after the temperature has peaked, and its decay is not well described by a single exponential, with a more rapid decay observable in the first $`\sim 20`$ ks of the flare, and a longer decay time-scale observable afterwards, closely mirroring the behavior of the 1.6–10 keV light curve. ### 4.1 Abundance variations during the flare As discussed in Sect. 1, previous observations of strong flares on Algol performed with the GINGA and ROSAT observatories hinted at variations of the coronal metal abundance during the flaring event. 
However, the limited spectral coverage and resolution of the GINGA and ROSAT proportional counters made it difficult to fully disentangle abundance effects from other effects, such as changes in the absorbing column density for the PSPC or changes in the temperature structure in the case of GINGA. The combination of resolution and spectral coverage of the LECS and MECS detectors allows to effectively disentangle the effects of the plasma metal abundance on the emitted spectrum from the thermal structure (for a discussion see Favata et al. (1997a, b)). This, coupled with the excellent time coverage of the BeppoSAX observation and the slow flare decay, allows to study in detail the evolution of the coronal metal abundance during the flare. The abundance of the quiescent plasma (i.e. the best-fit value of the two-temperature model to the spectra accumulated during the intervals 0, 1 and 18) is $`\sim 0.3`$ times the solar photospheric one, a value compatible with the abundance derived for the quiescent Algol corona by Stern et al. (1995) on the basis of an analysis of the EUVE spectrum. The temporal evolution of the best-fit abundance of the flaring plasma is shown in Fig. 4. Consistent with the indications of the ROSAT and GINGA data, the metal abundance of the flaring plasma increases significantly during the early phases of the flare, to a value of approximately 1.0, and then rapidly decays back to a value consistent with the one determined from the pre-flare spectrum. The time scale for the increase of the coronal abundance is similar to the time scale with which the emission measure increases, while the abundance “decay” time is significantly faster than either the flare temperature or the emission measure decay times, so that the coronal abundance goes back to its pre-flare value while a significant flaring component is still well visible in the spectrum (with significant excess emission measure and a temperature of some $`4`$ keV). The decaying part of the time evolution of the best-fit abundance is well fit with a single exponential, with an $`e`$-folding time of 36 ks, with the further evolution compatible with a constant abundance. The effect of the varying abundance in the flaring component is clearly visible in the set of spectra plotted in Fig. 5. The pre-flare spectrum (from segment 1) is well fit by a two-temperature plasma with abundance $`Z\sim 0.3`$, and a maximum temperature $`T\sim 4`$ keV. The spectrum immediately following (from segment 2) marks the beginning of the flare, and shows the presence of the very hot flaring component (with $`T\sim 12`$ keV), although still with relatively little emission measure and only a small enhancement ($`\sim 2`$ times) in the plasma abundance. In the subsequent spectrum (from segment 3) the temperature has started decaying, while the emission measure is still rising. The plasma abundance has reached its peak at $`Z\sim 1`$, so that the Fe K complex is well visible. The spectrum from segment 5 is characteristic of the flare’s emission measure peak. The temperature has decreased to $`\sim 8.6`$ keV, while the abundance is still close to the solar value. In the spectrum from segment 12 the emission measure has decayed to less than half the peak value, the flare temperature is down to $`\sim 5.4`$ keV and the abundance is back to almost the quiescent value ($`\sim 0.35`$), as again evident in the near disappearance of the Fe K complex. Could the changes in the best-fit plasma abundance parameter be explained through mechanisms other than a physical change in the abundance? 
With the exception of the two time intervals discussed above, the model used yields fully acceptable reduced $`\chi ^2`$ values, so that from a statistical point of view there is no need to invoke any other mechanism. For the flare spectra the fit process is driven, in the determination of the abundance, essentially by one diagnostic, i.e. the intensity of the Fe K line complex. For most coronal sources the intensity of the lines in the rich Fe L line complex will be a more important diagnostic of metallicity, given the much higher signal-to-noise ratio and thus statistical weight of the lines in this region of the spectrum, where both the intrinsic photon flux and the instrument’s effective area are higher. However, when the emission is dominated by a plasma at temperatures higher than $`\approx 5`$ keV (as in the case of the Algol flare spectrum), the spectrum shows very little line emission (especially from Fe L lines), and the only prominent line is the one due to the Fe K complex. Therefore, changing the abundance in the flaring component during the hot phase of the flare leaves the model spectrum essentially unchanged except for the intensity of the Fe K complex.

To show this, we have fit the spectrum from interval 5 with the same type of model (one-temperature on top of the quiescent emission), but excluding from the fit the Fe K region (i.e. channel energies from 5.5 to 8.0 keV) and with the abundance of the flaring component fixed to $`Z=0.4\times Z_{\odot }`$ (i.e. compatible with both the quiescent abundance value and the late-decay flaring value). The resulting best-fit model is shown in Fig. 6, in which the Fe K region has also been plotted. Inspection of the residuals (and their comparison with the relevant panel of Fig. 5) shows that the $`Z=0.4\times Z_{\odot }`$ model provides a good fit to all of the spectrum (with the residuals showing the same structure and size as in the case of the $`Z\approx 1.0\times Z_{\odot }`$ model), but, as expected, it strongly under-predicts the Fe K complex. As evident from the size of the residuals at the position of the Fe K complex, these are what drives the abundance determination in the flaring emission. Restricting the fit to a higher-energy interval, e.g. to the spectral region harder than 3 keV, does not change the best-fit abundance, although the resulting confidence regions are of course broader.

Fluorescence from the heated photosphere has been proposed as a possible mechanism for the enhancement of the Fe K emission during a strong flare. In this case, however, the emission would come from low ionization states of Fe, and the line energy would thus be significantly different ($`E\approx 6.4`$ keV). At the resolution of the MECS detectors, and at the signal-to-noise ratio of the spectra discussed here, such a large shift in the energy of the line centroid would easily be seen. We have determined the 90% upper limit to the intensity of a 6.4 keV Fe K line to be $`10^{-3}`$ of the intensity of the observed 6.8 keV line complex, so that this explanation can be ruled out. The same lack of shift in the line energy allows us to rule out significant non-equilibrium ionization effects. Also, the equilibration time scale for a plasma of this high temperature is of the order of a few tens of seconds, negligible in comparison with the long time scales on which the abundance is observed to vary.
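As a back-of-the-envelope check of this last statement, the equilibration time can be estimated from the common rule of thumb that a plasma reaches collisional ionization equilibrium after $`n_\mathrm{e}t`$ of order $`10^{12}`$ cm<sup>-3</sup> s; the criterion value and the densities fed in below are illustrative assumptions (the densities are of the order of those derived later in this paper), not quantities taken from the fits.

```python
# Rough check of the ionization equilibration time quoted above.
# Assumption: collisional ionization equilibrium is reached after
# n_e * t of order 1e12 cm^-3 s (an order-of-magnitude criterion;
# the exact value depends on element and temperature).
NE_T_EQ = 1.0e12                      # cm^-3 s, assumed criterion

for n_e in (1.0e10, 1.0e11):          # plausible flare densities [cm^-3]
    t_eq = NE_T_EQ / n_e              # equilibration time [s]
    print(f"n_e = {n_e:.0e} cm^-3 -> t_eq ~ {t_eq:.0f} s")

# Output: ~100 s and ~10 s, i.e. a few tens of seconds, negligible
# against the ~10 ks time scales of the observed abundance evolution.
```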
Given the relative simplicity of the formation mechanism of the Fe K line complex, as well as the absence of other contaminating features in the spectrum, explanations of the observed strong increase of Fe K emission other than an actual abundance increase in the emitting plasma appear rather difficult to find.

### 4.2 Absorbing material

Inspection of Table 2 shows that the best-fit value for the absorbing column density during the initial phase of the flare is very high ($`N(\mathrm{H})\approx 10^{21}`$ cm<sup>-2</sup>). This high best-fit absorbing column density is clearly driven by the depressed soft continuum of the initial flare spectra. A comparison of the spectra observed during time intervals 1 (pre-flare) and 2 and 3 (beginning of the flare), as plotted in Fig. 5, shows that while the emission above $`\approx 1`$ keV increases considerably once the flare begins, the emission in the region below $`\approx 0.5`$ keV is essentially unchanged from its pre-flare values. Given the high temperature of the flaring component ($`T\approx 10`$ keV) in this phase (which implies an essentially featureless continuum for the flaring spectrum, with the exception of the Fe K complex emission), the only way to obtain a spectrum significantly depressed in the softer region is to introduce an absorbing column density for the flaring component only. Indeed, inspection of the best-fit model spectrum shows that during time intervals 2 and 3 the flaring spectrum is completely absorbed below $`\approx 0.5`$ keV, so that the emission visible is only due to the quiescent component (for which the absorbing column density is fixed). Later during the flare evolution the absorbing column density decreases, so that the soft emission (as visible for time interval 5 in Fig. 5) increases by a factor of $`\approx 10`$, to an intensity proportional to the harder part of the spectrum.

The change in absorbing column density during the flare evolution is also visible in the light curves in the soft (0.1–0.5 keV) and hard (1.6–10.0 keV) bands (i.e. Figs. 2 and 3): in the soft band the flare only begins in interval 3, while in the hard band the count rate has already risen by a factor of 10 at the end of interval 2. This apparent “delay” in the flare onset in the softer band can be explained by the high best-fit absorbing column density. The soft-band light curve also shows a much slower decay than the hard-band one, with an actual rate increase between intervals 6 and 8, and an essentially flat behavior between intervals 8 and 12.

Is the best-fit absorbing column density actually due to real material in the line of sight? An obvious alternative explanation would be that the fit process uses the absorbing column density to compensate for deficiencies in the plasma emission codes. However, given the simplicity of a thermal spectrum at the high temperature observed at the beginning of the flare (dominated by continuum emission), it is hard to imagine that the softer part of the spectrum could be under-predicted by a factor of ten. We thus consider the presence of local absorbing material, depressing the soft emission at the beginning of the flare, to be the correct explanation. One possible interpretation is in terms of a massive coronal mass ejection taking place at the beginning of the flare, which provides the absorbing material.

### 4.3 The PDS spectra

The PDS detector on board the BeppoSAX observatory is sensitive to X-rays approximately in the passband from $`15`$ up to $`300`$ keV. Most coronal sources have too little flux in this band to be detected.
However, given its very high temperature, the emission from the Algol flare produces an easily detectable signal also in the PDS. We extracted individual PDS spectra for the time intervals 3 to 7 (inclusive), using the tools provided within the saxdas package, and jointly fitted the LECS, MECS and PDS data for these intervals with the same model (a one-temperature model superimposed on the two-temperature quiescent emission) which we used for the analysis of the LECS and MECS data only. Unfortunately, while it would have been desirable to include the PDS data in the analysis of the time dependence of the flare parameters, the temporal coverage afforded by the PDS data is more limited, and for most of the decay phase no PDS spectra with sufficient signal to noise are available. Thus, to keep the set of parameters for the flare decay homogeneous, we have only used the LECS and MECS data in that context. The sequence of time-resolved PDS spectra is shown, together with the best-fit model, in Fig. 7.

One of the main questions for which the PDS data are of interest is whether any non-thermal spectral components are present in the flare spectrum. The spectra of Fig. 7, together with their residuals, show that the PDS spectrum of the flare peak is well described as the high-energy tail of the hot flaring plasma, and no additional non-thermal components appear to be present. All of the PDS spectra present a “bump” in the region close to $`50`$–60 keV (particularly visible in the spectrum from interval 4), plus a second minor bump close to 30 keV. Such features are, according to the PDS calibration team (D. Dal Fiume, private communication), of instrumental origin, due to a blend of residual lines from the Am calibration source with instrumental fluorescence features due to the Ta collimator (feature at 50–60 keV), while the feature at 30 keV is the escape peak of the feature at higher energy. The temperature of the flaring component derived from the joint LECS, MECS and PDS fit is, within $`1\sigma `$, the same as the temperature derived from the LECS and MECS data only, thus confirming that the spectrum seen in the PDS detector is the tail of the thermal spectrum seen in the LECS and MECS passbands.

### 4.4 Energetics

From the detailed temporal evolution of the flare’s spectral parameters it is possible to derive the instantaneous luminosity in the 0.1–10 keV band and therefore, by integration, the total energy emitted in soft X-rays during the flare. The temporal evolution of the X-ray luminosity is shown in Fig. 8. The total X-ray radiative loss at the end of the flare is $`1.4\times 10^{37}`$ erg, making this one of the most energetic X-ray flares ever observed on stellar sources, on a par with the long flare observed on CF Tuc by Kürster & Schmitt (1996). Given the short thermodynamic decay time of the loop implied by the eclipse-derived size ($`\approx 10`$ ks, see Sect. 6), the loop has, on the total time scale of the flare, negligible thermal inertia, i.e. it will quickly respond to changes in the heating. Thus, the temporal evolution of the observed X-ray luminosity closely describes the temporal evolution of the heating, slowly rising for some tens of ks and then decaying for more than 100 ks. Whatever the mechanism ultimately responsible for the heating, it must be capable of operating on these time scales.
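The total radiated energy quoted above follows from a straightforward time integration of the instantaneous luminosity; a minimal sketch is given below. The light-curve arrays are illustrative placeholders with roughly the right magnitudes, not the measured values of Fig. 8.

```python
import numpy as np

# Integrate the instantaneous 0.1-10 keV luminosity over the flare to get
# the total radiated energy. The sampled light curve below is a placeholder
# standing in for the measured L_X(t) of Fig. 8.
t_ks = np.array([0.0, 20.0, 40.0, 70.0, 100.0, 130.0, 160.0, 200.0])  # [ks]
L_x = np.array([0.2, 1.5, 1.0, 0.7, 0.4, 0.25, 0.15, 0.05]) * 1e32    # [erg/s]

E_tot = np.trapz(L_x, t_ks * 1.0e3)   # convert ks -> s before integrating
print(f"total 0.1-10 keV radiative loss ~ {E_tot:.1e} erg")  # ~1e37 erg
```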
## 5 Analysis of the flare decay

Different approaches have been proposed to determine, from the temporal evolution of the flare temperature and emission measure, the physical parameters, and in particular the size (and thus the density), of the flaring region. One unique feature of the Algol flare discussed here is the availability of an actual measurement of the physical size of the flaring region from the duration and shape of its eclipse (Schmitt & Favata 1999). Thus, for the first time we can test whether the different methods used to analyze the flare decay phase yield answers consistent with the size of the flaring region implied by the eclipse.

### 5.1 Assumptions

All the methods discussed in the literature rely on the analysis of the decay phase of the flare, defined as the phase during which both the temperature and the total number of emitting particles (using the emission measure, or more properly its square root, as a proxy) decrease. This is equivalent to saying that the total energy content of the flaring loop is decreasing.

### 5.2 The quasi-static cooling formalism

In this Section we follow the quasi-static formalism for the analysis of the decay phase of a flare, as developed by van den Oord & Mewe (1989) and as applied, for example, to the analysis of the large PSPC flare on Algol by Ottmann & Schmitt (1996). In this framework the information about the linear size of the flaring loop is derived from the observed relaxation times of the cooling plasma, under the assumption that the two relevant mechanisms for energy loss from the flaring plasma are radiation and thermal conduction along the loop. The ratio between the conductive and radiative time scales ($`\tau _\mathrm{c}/\tau _\mathrm{r}`$) is in general unknown, given that $`\tau _\mathrm{c}`$ will depend on the geometry of the flaring loop(s). The quasi-static formalism requires this ratio to be constant. Under this condition the geometry of the loop can then be determined. Following van den Oord et al. (1988), the effective decay time for the thermal energy during the loop decay is defined as

$$\frac{1}{\tau _{\mathrm{eff}}}=\frac{1}{\tau _\mathrm{r}}+\frac{1}{\tau _\mathrm{c}}=\frac{(1-\gamma /2)}{\tau _T}+\frac{1}{2\tau _\mathrm{d}}$$ (1)

where $`\tau _T`$ is the decay time of the loop temperature, $`\tau _\mathrm{d}`$ is the decay time of the light curve and $`\gamma `$ is the power-law exponent of the temperature dependence of the radiative cooling function $`\mathrm{\Psi }`$ of an optically thin hot plasma. This function is usually parameterized as $`\mathrm{\Psi }(T)=\mathrm{\Psi }_0T^\gamma `$, and for sufficiently hot plasmas ($`T\gtrsim 20`$ MK) the values $`\mathrm{\Psi }_0=10^{-24.73}`$ and $`\gamma \simeq 0.25`$ provide a very good parameterization of the radiative cooling losses. Note that at these temperatures the emission is dominated by the continuum, with very little line contribution, and hence the emissivity does not depend on the metal abundance, making this approach applicable even in the presence of observed abundance variations. The radiative and conductive decay times are defined as

$$\tau _\mathrm{r}=\frac{3n_0k_\mathrm{B}T_0}{n_0^2\mathrm{\Psi }_0T_0^\gamma }$$ (2)

and

$$\tau _\mathrm{c}=\frac{3n_0k_\mathrm{B}T_0L^2}{8\kappa _0T_0^{7/2}f(\mathrm{\Gamma })(\mathrm{\Gamma }+1)}$$ (3)

where $`n_0`$ and $`T_0`$ are the electron density and the temperature at the beginning of the decay phase and $`L`$ and $`\mathrm{\Gamma }`$ are the length and expansion factor of the loop.
$`f(\mathrm{\Gamma })`$ is a correction factor accounting for the change in the conductive flux for tapered loops. The adopted value for the conductivity is $`\kappa _0=8.8\times 10^{-7}`$ erg s<sup>-1</sup> cm<sup>-1</sup> K<sup>-7/2</sup>. To determine whether the quasi-static formalism is applicable in principle, the constancy of the ratio between the radiative and conductive cooling times during the flare decay has to be checked. This ratio can be written as (Eq. (27) of van den Oord & Mewe (1989)):

$$\mu =\frac{\tau _\mathrm{r}}{\tau _\mathrm{c}}=C\times \frac{T^{13/4}}{EM},$$ (4)

where $`C`$ is a constant incorporating all the geometrical factors and $`T`$ and $`EM`$ are expressed in cgs units. In a quasi-static cooling phase this quantity must be constant. The time evolution of $`\mu `$ for the Algol flare discussed here is plotted in Fig. 9, using the values of $`T`$ and $`EM`$ shown in Fig. 4. Once the emission measure has reached its peak (i.e. in time interval 5) $`\mu `$ is constant within the error bars, although with some evidence for a slight increase between time intervals 6 and 8, which may indicate additional heating, as observed in the time evolution of the plasma temperature (see Fig. 4).

#### 5.2.1 The shape of the decay

The formalism discussed above implicitly assumes that a decay time can be uniquely defined, i.e. that the relevant quantities decay in a smooth and monotonic way. For the flare discussed here it is evident that the decay encompasses different phases: a first phase (intervals 4–5) in which the light curve as well as the derived temperature and emission measure decay exponentially on a rather fast $`e`$-folding time scale, a phase in which the decay has slowed down and in which the temperature is increasing again (intervals 6–8, obviously indicative of the presence of prolonged heating), and a final long decay phase in which the light curve is well described by a slow exponential decay, similar to the temperature and emission measure decays. Thus, the conditions for the quasi-static formalism are not satisfied for the whole decay phase; however, it should be applicable to the phase in which the decay is monotonic and smooth, with an exponential light curve, i.e. between time intervals 8 and 14 inclusive. During this interval the $`e`$-folding times for the temperature and count rate of the flare are 97 and 64 ks respectively, which combine to yield an effective decay time, as defined above, of $`\tau _{\mathrm{eff}}=59`$ ks. The temperature at the beginning of the decay phase thus defined is 7.2 keV, or 83 MK, and the emission measure is $`6.4\times 10^{53}`$ cm<sup>-3</sup>.

If we apply the formalism of van den Oord & Mewe (1989), we can calculate the size of the flaring loop as a function of the loop expansion factor $`\mathrm{\Gamma }`$, the number $`N`$ of loops and the ratio $`\alpha `$ of the loop’s diameter at the base to its length. The resulting relationship between the loop length and $`N\alpha ^2`$ is plotted in Fig. 10. Unless very small loop diameters are postulated (i.e. smaller than one hundredth of the loop length), Fig. 10 shows that the flaring loop must be longer than a few times $`10^{12}`$ cm (i.e. of order 10 stellar radii), independently of any possible loop expansion.

#### 5.2.2 Scaling law approximation

To further constrain the geometry of the flaring loops, we can follow the approach of Stern et al.
(1992), who show that, once the applicability of the quasi-static framework has been verified, the geometric parameters of the flaring loops, as well as the plasma density, can be derived through simple scaling laws from the detailed analysis by van den Oord & Mewe (1989) of the EXOSAT flare of Algol (their Eqs. (19)). The constancy of the ratio between the radiative and conductive cooling times gives a scaling law of the form

$$n_\mathrm{A}L\propto T_\mathrm{A}^{(7-2\gamma )/4}$$ (5)

where the subscript A refers to the apex of the loop, where the bulk of the emitting plasma is located. Combining the above equation with the definition of the radiative cooling time $`\tau _\mathrm{r}`$ (Eq. (2)) yields $`L\propto \tau T^{7/8}`$ (again for $`\gamma =0.25`$), and $`n\propto \tau ^{-1}T^{6/8}`$. If we scale from the analysis of van den Oord & Mewe (1989), the decay time of the flare observed here is 10.5 times longer, while the temperature at the beginning of the decay phase is 1.44 times higher. Thus, the loop length is 14.5 times longer, or $`2.3\times 10^{12}`$ cm, and the density is 8.0 times lower, or $`3.3\times 10^{10}`$ cm<sup>-3</sup>. The remaining geometrical parameters for the flaring loops can be derived from the definition of the emission measure

$$EM\simeq \frac{\pi }{8}n_e^2L^3(\mathrm{\Gamma }+1)N\alpha ^2$$ (6)

where the relationship is approximate because of the (weak) dependence of the scaling laws on $`\mathrm{\Gamma }`$. Substituting all the values yields $`(\mathrm{\Gamma }+1)N(\alpha /0.1)^2=1.2\times 10^{-2}`$.

The loop length predicted by the quasi-static formalism for the flare discussed here is clearly substantial: a length of $`2.3\times 10^{12}`$ cm corresponds to a loop height of $`1.4\times 10^{12}`$ cm, or $`5.7R_\mathrm{K}`$, large but still within the pressure scale height for the K star in Algol (for the hot plasma at the beginning of the decay phase – 83 MK – the pressure scale height for the K star is $`6\times 10^{12}`$ cm, or 24 $`R_\mathrm{K}`$).

If the decay time constants are derived from the complete decay phase, i.e. from the moment at which both $`T`$ and $`EM`$ start to decrease (interval 5), the results do not change significantly. In this case the precise form of the decay is ignored, and only the time scales are considered. This approach is also likely to be more consistent with the majority of the published results on stellar flares, in which neither the temporal coverage nor the statistics are sufficient to show the details of the time evolution of the flare parameters, and a single determination of the decay time constants starting from the flare peak is made. For example, for the Algol flare observed by ROSAT, only a sporadic coverage of the flare light curve was available, so that any short-lasting intermediate heating episodes such as the ones evident in this case would not have been detected. In the present case the $`e`$-folding times (derived from the peak of the quantity of interest) for the temperature and count rate are, respectively, 118 and 59 ks, combining to yield an effective decay time of 63 ks, and the temperature at the beginning of the decay is 98 MK. Application of the same scaling laws as above results in a loop length of $`2.8\times 10^{12}`$ cm and a plasma density of $`3.5\times 10^{10}`$ cm<sup>-3</sup>, essentially identical to the results obtained by considering only the exponential part of the decay.
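For concreteness, the scaling-law arithmetic of this subsection can be reproduced with a few lines of code. A sketch follows; the reference loop length and density for the EXOSAT flare are back-computed here from the ratios quoted above, so they are assumptions of the sketch rather than values quoted by van den Oord & Mewe (1989).

```python
# Scaling-law estimate (gamma = 0.25): L ~ tau * T**(7/8), n ~ T**(3/4) / tau,
# scaled from the van den Oord & Mewe (1989) analysis of the EXOSAT Algol flare.
tau_ratio = 10.5                 # decay time, this flare / EXOSAT flare
T_ratio = 1.44                   # decay-onset temperature ratio

L_ratio = tau_ratio * T_ratio ** (7.0 / 8.0)   # ~14.5
n_ratio = T_ratio ** (6.0 / 8.0) / tau_ratio   # ~1/8

# Assumed (back-computed) EXOSAT reference values:
L_ref, n_ref = 1.6e11, 2.6e11    # cm, cm^-3

print(f"L ~ {L_ratio * L_ref:.1e} cm    (text: 2.3e12 cm)")
print(f"n ~ {n_ratio * n_ref:.1e} cm^-3 (text: 3.3e10 cm^-3)")
```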
#### 5.2.3 Full fits to the scaling-law formalism

We have also performed a full fit of the equations of van den Oord & Mewe (1989) to the complete decay phases of the temperature and count rate. The best fit is shown in Fig. 11; the resulting loop length is $`1.7\times 10^{12}`$ cm, with a corresponding plasma density of $`\approx 10^{10}`$ cm<sup>-3</sup>. Again, the loop length and density are quite similar to the values obtained with a simple application of the scaling laws to the exponentially decaying part of the light curve, confirming the relative insensitivity of the quasi-static formalism to the details of the starting assumptions.

### 5.3 The slope in the temperature-density diagram

A different approach to the analysis of the decay of spatially unresolved flares has been developed by Reale et al. (1997). This method simultaneously yields estimates for the physical size of the flaring loop as well as for the presence and time scale of heating during the decay phase of the flare. The method uses the slope of the locus of points in the temperature versus density diagram during the flare decay phase (Sylwester et al. 1993) as a diagnostic of the presence of sustained heating. Under the assumption that the loop volume remains constant during the flare, the square root of the emission measure is used as a proxy for the density; we will refer to this approach, in the following, as the $`EM`$ vs. $`T`$ slope method.

Detailed hydrodynamical simulations show that flares decay approximately along a straight line in the $`\mathrm{log}\sqrt{EM}`$–$`\mathrm{log}T`$ diagram, and that the value of the slope $`\zeta `$ of the decay path is related to the ratio between the observed decay time of the light curve $`\tau _{\mathrm{lc}}`$ and the “natural” thermodynamic cooling time of the loop without additional heating, $`\tau _{\mathrm{th}}`$. The validity of the simulation-derived relationship has been verified by Reale et al. (1997) with a sample of solar flares observed by Yohkoh. This approach makes it possible to explicitly estimate the intrinsic spread of the derived physical parameters; in practice the estimated loop size agrees, for the solar case, to within $`20\%`$ with the actual size. The recalibration of the method for temperatures and emission measures derived with the BeppoSAX MECS detector is discussed by Pallavicini et al. (in preparation). In practice, an observed flare decay with a slope $`\zeta \simeq 1.7`$ (again for $`T`$ and $`EM`$ derived with the MECS detector) implies that no additional heating is present ($`\tau _\mathrm{h}=0`$), while smaller values of $`\zeta `$ imply progressively longer heating time scales. The relationship between $`\zeta `$ and $`\tau _{\mathrm{lc}}/\tau _{\mathrm{th}}`$ becomes effectively degenerate at values of $`\zeta `$ smaller than about 0.4, so that $`\zeta \lesssim 0.4`$ implies a heating time scale comparable to the observed decay time of the flare (see Fig. 2 of Reale et al. 1997). The loop size is estimated as a function of $`\tau _{\mathrm{lc}}`$, $`T_{\mathrm{max}}`$ and $`\zeta `$ (where $`T_{\mathrm{max}}`$ is the peak flare temperature, not the temperature at the beginning of the decay phase). For a given $`\tau _{\mathrm{lc}}`$ and $`T_{\mathrm{max}}`$, the smaller $`\zeta `$ (and thus the longer the additional heating), the smaller the implied loop length. In a large fraction of the solar flares examined by Reale et al.
(1997) significant heating is present, so that the size inferred from the thermodynamic decay time of the loop alone significantly over-estimates the true size (by factors between 2 and 10).

The evolution of the spectral parameters for the Algol flare is plotted, in the $`\mathrm{log}\sqrt{EM}`$–$`\mathrm{log}T`$ plane, in Fig. 12. The set of ($`\sqrt{EM}`$, $`\mathrm{log}T`$) pairs plotted includes the rising phase of the flare (up to interval 5), when the emission measure is still building up, so that the decay has not yet begun. Afterwards, the initial very steep decay lasts only for a relatively brief time, as the temperature increases again in intervals 6 to 8. The apparently undisturbed decay begins, as discussed in the framework of the quasi-static method, with interval 8 and, up to interval 14, follows a clean straight line. The last three intervals (which however do not sample the light curve very well, being interrupted by the eclipse) show evidence for a change in the slope, which becomes shallower.

Reale et al. (1997) quote four conditions for the method to yield optimal results: the slope $`\zeta `$ must be greater than $`\approx 0.3`$, the light curve must decay exponentially, the trajectory of the decay in the $`\mathrm{log}T`$–$`\mathrm{log}\sqrt{EM}`$ plane must be linear, and the resulting loop length must be well below the local pressure scale height. In the present case the decay is not strictly linear, as there is clear evidence for an episode of reheating, so that the slope $`\zeta `$ is not very well constrained, and the resulting uncertainties are likely to be significantly larger. The choice of which interval of the observed decay to use for the determination of the flare parameters is, in this case, prescribed by the method; it begins when the emission measure peaks and ends with the time interval in which the light curve has decayed to a count rate of 10% of the peak value. The average slope of the decay in the $`\mathrm{log}T`$–$`\mathrm{log}\sqrt{EM}`$ plane is $`0.84\pm 0.20`$ (90% uncertainty, as in all of the following). By applying the equivalent of Eq. (3) of Reale et al. (1997), the ratio between the observed light-curve decay time and the thermodynamic decay time of the flaring loop (i.e. the decay time which would be observed in the absence of heating) is given by (recalibrated for the BeppoSAX MECS detector)

$$\tau _{\mathrm{LC}}/\tau _{\mathrm{th}}=8.68\times e^{-\zeta /0.59}+0.3=F(\zeta )$$ (7)

In our case, $`\tau _{\mathrm{LC}}/\tau _{\mathrm{th}}=2.4`$. The loop length is given by the equivalent of Eq. (4) of Reale et al. (1997), i.e.

$$L=\frac{\tau _{\mathrm{th}}\sqrt{0.233\times T_\mathrm{p}^{1.099}}}{3.7\times 10^{-4}}$$ (8)

where $`T_\mathrm{p}`$ is the measured peak flare temperature. Substituting the values determined for the Algol flare, i.e. $`\tau _{\mathrm{LC}}=49.6\pm 4.5`$ ks (determined from the peak down to the 10% level), $`\tau _{\mathrm{th}}=20.1`$ ks and $`T_\mathrm{p}=138`$ MK, the predicted loop length is $`L=8.2\times 10^{11}`$ cm ($`3.3R_\mathrm{K}`$), with a 90% uncertainty on the derived length of $`\pm 3.5\times 10^{11}`$ cm, yielding a formally allowed range of loop lengths of 4.7 to $`11.7\times 10^{11}`$ cm, i.e. 1.9 to $`4.7R_\mathrm{K}`$. The peak temperature at the loop apex is $`T_{\mathrm{max}}=200`$ MK. As expected for a flare with significant heating, the loop length derived is significantly shorter (a factor of 2 to 4) than the value derived through the quasi-static formalism under the assumption of no heating.
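The numbers of this subsection follow directly from Eqs. (7) and (8); a minimal sketch of the computation is given below (small differences with respect to the quoted values reflect rounding of the inputs).

```python
import math

# EM vs. T slope method (Reale et al. 1997, MECS recalibration of the text).
zeta = 0.84        # slope in the log sqrt(EM) - log T plane
tau_lc = 49.6e3    # observed light-curve decay time [s]
T_p = 1.38e8       # measured peak flare temperature [K]

F = 8.68 * math.exp(-zeta / 0.59) + 0.3      # Eq. (7): tau_LC / tau_th
tau_th = tau_lc / F                          # thermodynamic decay time [s]
L = tau_th * math.sqrt(0.233 * T_p ** 1.099) / 3.7e-4   # Eq. (8), [cm]

print(f"tau_LC/tau_th = {F:.2f}, tau_th = {tau_th / 1e3:.1f} ks")
print(f"loop length L ~ {L:.1e} cm")         # ~8e11 cm, i.e. ~3 R_K
```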
No density estimate is produced directly by the $`EM`$ vs. $`T`$ slope analysis. A simple-minded density estimate can be derived by computing the loop’s volume (with a given assumed value of $`\alpha `$, typically $`0.1`$) and computing the peak density from the peak emission measure through the simple relationship

$$n\simeq \sqrt{\frac{EM}{V}}\simeq \sqrt{\frac{EM}{2\pi \alpha ^2L^3}}$$ (9)

The density corresponding to the range of loop lengths derived through the $`EM`$ vs. $`T`$ slope method is $`2.7\times 10^9/\alpha `$ to $`1.9\times 10^9/\alpha `$ cm<sup>-3</sup>.

## 6 Comparison with the flaring region size determined by eclipse analysis

The lengths derived through the analysis of the flare decay range between $`4.7\times 10^{11}`$ cm (the lower end of the range of values allowed by the $`EM`$ vs. $`T`$ slope method) and $`28\times 10^{11}`$ cm (the longer value obtained with the quasi-static formalism); the corresponding loop heights (under the assumption of a simple, single-loop geometry) above the stellar surface range from $`1.2`$ to $`7`$ radii of the K star. The corresponding plasma densities are a few times $`10^{10}`$ cm<sup>-3</sup>. How realistic are the long tenuous loops implied by the quasi-static decay-phase analysis? Unlike for any previous stellar flare, we can in this case compare these values with the allowed volume for the flaring plasma derived from a geometrical analysis of the eclipse of the X-ray emission. As discussed in detail in Schmitt & Favata (1999), the observed total eclipse of the flaring plasma implies a maximum height of the flaring region of less than $`0.6`$ stellar radii, i.e. $`h\lesssim 0.6R_\mathrm{K}\simeq 1.5\times 10^{11}`$ cm, equivalent to a maximum loop length (assuming a vertical loop with a simple geometry) of $`2.4\times 10^{11}`$ cm, significantly smaller than the values derived from the analysis of the quasi-static flare decay. The lower end of the allowed range for the loop lengths derived with the $`EM`$ vs. $`T`$ slope method is closer to the geometrical size derived through the eclipse analysis, but still marginally too large.

An immediate consequence of the small volume implied by the eclipse is that the flaring plasma must have a higher density than predicted by the quasi-static analysis. The density is estimated by Schmitt & Favata (1999) at $`f^{-0.5}\times 10^{11}`$ cm<sup>-3</sup>, where $`f`$ is the filling factor for the flaring plasma within the region allowed by the eclipse, $`f<1`$. The high density in turn implies a radiative cooling time for the plasma of $`24f^{0.5}`$ ks and a conductive decay time of only $`2.5f^{-0.5}`$ ks, with a resulting effective decay time, for any value of $`f`$, negligible when compared with the observed long decay time of the flare. Therefore, the observed decay must be driven almost entirely by the evolution of the energy dissipation responsible for the heating. In this framework, the irregularity of the observed flare decay is then not so much related to details of the decay process or of the flaring structure, but rather to the temporal evolution of the energy dissipation process.

The large loop sizes derived from the quasi-static analysis of the flare decay, although somewhat extreme, are not unusual: large loop lengths are a common result of the decay-phase analysis of large flares observed on all classes of coronal sources.
For example, Schmitt (1994) shows that, using the quasi-static cooling paradigm, the flare on the dMe star EV Lac observed during the ROSAT All Sky Survey implies a loop length of $`6\times 10^{11}`$ cm (i.e. $`10`$ stellar radii); recently, Tsuboi et al. (1998) reported the detection of a large flare on the pre-main sequence star V773 Tau, showing that the observed cooling time implies loop sizes of $`4\times 10^{11}`$ cm, or $`1.2`$ stellar radii; the very long ROSAT flare observed on the active binary CF Tuc (Kürster & Schmitt 1996) implies (depending on the details of the assumptions) loop lengths of order $`2`$ to $`5\times 10^{11}`$ cm, again comparable to or larger than the stellar radius. Such large loop sizes are in contrast with the solar case, where, even for the strongest flares, the size of the flaring loops remains well below the solar radius. While the energy released in the stellar flares discussed here is several orders of magnitude larger than in the strongest solar flares, so that the solar analogy may not necessarily be fully relevant, very large loops raise several difficulties related to the high magnetic fields implied at large distances from the stars, the heating mechanisms, etc. While the presence of long magnetic structures anchored on both stars could somewhat alleviate these concerns in the case of active binary systems (but not for Algol-type systems), this would not work in the case of single stars. Thus the physical meaning of the long loop sizes derived for spatially unresolved stellar flares from the quasi-static analysis of the loop decay appears questionable when compared with the geometric constraints on the size of the flaring region derived from the eclipse analysis.

The present observations show that large flares can be produced by quite compact, high-density structures, much smaller than implied by the quasi-static analysis of the loop decay phase or by simple considerations about the radiative decay time of an undisturbed plasma. Sustained heating, fully dominating the observed decay, must therefore, in such cases, be present. The heating mechanism must then not be impulsive only, but must be followed by a long-lasting tail, with time scales of order tens of ks. The $`EM`$ vs. $`T`$ slope analysis in this case does not yield a very stringent limit on the size of the flaring loop (and it still appears to marginally overestimate the loop’s size), but it provides clear support for the presence of long-lasting heating. The application of the $`EM`$ vs. $`T`$ slope analysis in a situation in which the flare decay in the $`\mathrm{log}T`$–$`\mathrm{log}\sqrt{EM}`$ plane is not linear is likely to result in larger uncertainties on the derived parameters. The flare shows clear evidence for a significant reheating episode (in time intervals 6 to 8), which alters the decay path, unlike the hydrodynamic simulations used to derive the loop’s size, in which the heating is parameterized as a monotonically decreasing function of time. Note that if the same flare had been observed at a significantly lower $`S/N`$, such a reheating episode would not have been resolved, but would simply have resulted in a linear but shallower flare decay, biasing the resulting flare parameters. One alternative possibility is that the flare is the result of a different physical process than the single, constant-volume loop (a “compact” flare) which is assumed throughout the analysis. This could be, e.g.
– similarly to the solar “two-ribbon” flares – an evolving loop arcade. Once more this would imply different physical conditions compared to the hydrodynamic simulations of Reale et al. (1997), and thus larger uncertainties on the results.

### 6.1 Comparison with previous flares observed on Algol

A few major flares have already been observed on Algol by a variety of X-ray telescopes. The physical parameters derived from the analysis of the decay phase are listed in Table 3. For comparison we also list, in Table 4, the parameter values derived, for the BeppoSAX flare, with the different approaches discussed above.

## 7 Conclusions

The large Algol flare discussed here has several unique characteristics, already listed in the Introduction, which make it an exceptional event for the level of detailed constraints which can be derived on the flaring process. The most important conclusions are:

* The variations of the metallicity of the flaring plasma (which had already been hinted at in the study of previous X-ray flares) can be unambiguously determined. The long duration of the flare and the good count statistics make it possible to determine the temporal evolution of the metal abundance, showing that it rises by a factor of three during the flare rise and decays again to the pre-flare value on time scales faster than any of the other observed time scales (for the light curve, emission measure or temperature decay). Alternative explanations (as opposed to real abundance variations) for the observed changes in the spectrum, such as non-equilibrium effects, are difficult to reconcile with the plasma density derived for the flaring loop (a fortiori with the higher densities implied by the eclipse), which implies that the plasma would relax to equilibrium ionization conditions in a few tens of seconds at most. Fluorescence from the X-ray bombarded photosphere is in contrast with the lack of an observed shift of the centroid of the Fe K line toward 6.4 keV and with the fact that the flare occurs at the permanently-occulted pole of the K-type star, so that the potentially fluorescing photosphere would in any case be largely self-eclipsed.

* During the initial (rising) phase of the flare the best-fit absorbing column density is large ($`\approx 3\times 10^{21}`$ cm<sup>-2</sup>), and decays on time scales of some tens of ks. We interpret this as possibly associated with moving, cool absorbing material in the line of sight, i.e. a major coronal mass ejection associated with the flare’s onset.

* The length derived for the flaring loop from the analysis of the flare decay with the quasi-static formalism of van den Oord & Mewe (1989) is consistently too large when compared with the upper limit on the size of the flaring region imposed by the observed total eclipse (see Schmitt & Favata 1999). Different assumptions result in a small range of derived loop half-lengths (from $`18`$ to $`28\times 10^{11}`$ cm), but they are all well above the eclipse-derived upper limit of $`2.4\times 10^{11}`$ cm. Given the nature of the geometric constraints imposed by the eclipse observations, the conclusion is thus that the decay-derived loop lengths are too large, by a factor of at least a few, and the derived densities correspondingly too low. This may be relevant also for the interpretation of the very large loop sizes which have been determined with this method in the past for other large X-ray flares on other stars.

* The range of loop lengths derived with the analysis method of Reale et al.
(1997) implies the presence of significant sustained heating, in agreement with the conclusions drawn on the basis of the eclipse-derived size. The derived lengths are still marginally too large with respect to the eclipse-derived size; this is likely due to the application of the method in the presence of a significant reheating episode, which (when compared with hydrodynamical simulations with monotonically decreasing heating) will introduce additional uncertainties in the results. Nevertheless, even in such a case, this method appears to yield more reliable information on the physical conditions of the flaring region than the quasi-static method.

* As a consequence of the smaller loop length implied by the eclipse, sustained heating must be present throughout the flare, and it must actually be driving the decay. Given the small intrinsic thermodynamic decay time of the loop implied by the small size and corresponding high density, the loop has a small “thermal inertia”, and thus the heating time profile must essentially be the same as the observed X-ray luminosity profile (given in Fig. 8).

###### Acknowledgements. We would like to thank M. Guainazzi and D. Dal Fiume for their help in the analysis and understanding of the PDS data, and A. Parmar for useful discussions on the LECS data. We are grateful to F. Reale for several illuminating discussions on his methodology for the analysis of flare decays, as well as for calibrating his method for the BeppoSAX MECS detectors. The BeppoSAX satellite is a joint Italian and Dutch program.
# Brief Comments on “The Shapiro Conjecture, Prompt or Delayed Collapse?” by Miller, Suen and Tobias

## Abstract

Recent numerical simulations address a conjecture by Shapiro that when two neutron stars collide head-on from rest at infinity, sufficient thermal pressure may be generated to support the hot remnant in quasi-static equilibrium against collapse prior to neutrino cooling. The conjecture is meant to apply even when the total remnant mass exceeds the maximum mass of a cold neutron star. One set of simulations seems to corroborate the conjecture, while another, involving higher-mass progenitors each very close to the maximum mass, does not. In both cases the total mass of the remnant exceeds the maximum mass. We point out numerical subtleties in performing such simulations when the progenitors are near the maximum mass; they can explain why the simulations might have difficulty assessing the conjecture in such high-mass cases.

Shapiro recently speculated that following the head-on collision of two neutron stars from rest at infinity, the two recoil shocks which propagate back into each star following contact might generate sufficient thermal pressure to hold up the remnant against gravitational collapse, at least until it cools via neutrino emission. This quasi-equilibrium state will last many dynamical timescales ($`\sim `$ msec) because the neutrinos, which eventually carry off the thermal energy, leak out slowly, on the neutrino diffusion timescale ($`\sim 10`$ sec). The argument was independent of the total mass of the progenitors and applies even if the mass of the remnant greatly exceeds the maximum mass of a cold neutron star. A simple, analytic analysis using relativistic polytropes served to motivate the conjecture. The analysis assumed strict conservation of rest-mass and total mass-energy in the collision, as well as relaxation, following merger, to the same polytropic density profile in the remnant as in the progenitors. Given these assumptions it was possible to show that a stable equilibrium solution always exists for the remnant, independent of mass. In evaluating these assumptions, Shapiro argued that while they greatly oversimplify the collision in order to permit an analytic proof, they do provide a reasonable approximate description of the idealized head-on scenario under consideration. He did show that the loss of energy due to gravitational radiation, though small, would rule out an equilibrium solution for configurations arbitrarily close to the maximum mass, if the collision were not accompanied by a loss of rest-mass. More significantly, he cautioned that a dynamical collision need not relax to a stable equilibrium solution after all, even though one exists, but could instead overshoot the equilibrium state and collapse to a black hole. But he put forward the possibility of delayed collapse as a plausible outcome of head-on collisions from rest at infinity, and one which might be realized in numerical simulations.

Numerical simulations to test this conjecture have been reported recently in Ref. The simulations treat colliding $`\mathrm{\Gamma }=2`$ polytropes in a 3+1 numerical scheme that solves Einstein’s field equations of general relativity coupled to the equations of relativistic hydrodynamics. The simulations employed $`\mathrm{\Gamma }=2`$ polytropic models that satisfy the TOV equilibrium equations for isolated, spherical stars. The stars are placed at finite separation, no more than $`3R`$ apart, where $`R`$ is the stellar radius.
These configurations are then boosted towards each other with the Newtonian free-fall velocity. No attempt is made to correct for the distortions in the initial matter density or pressure profiles that arise at such close separation due to tidal interactions. The simulations described in Ref. deal with collisions of $`1.4M_{\odot }`$ stars. We shall assume below that this quoted mass represents the total mass-energy of each of the progenitors, although no distinction is made in Ref. between rest-mass $`M_0`$ and total mass-energy $`M`$, nor is there any discussion about how the mass in the colliding system is actually measured and monitored numerically. The subsequent evolution of this model produces a black hole a few dynamical timescales following contact and merger. This example, it is claimed in Ref., is in contradiction with the conjecture of Shapiro.

Two aspects of the numerical simulations are crucial to note. One is that the maximum stable mass of the polytropic equation of state is $`1.46M_{\odot }`$ in the units of Ref. The other, alluded to but not reported in Ref., is that a second set of simulations using the same equation of state for the head-on collision of $`0.8M_{\odot }`$ configurations did not result in collapse following merger. In this case the simulations show that the merged remnant is supported in stable equilibrium by the thermal pressure generated by collision-induced shock heating, in apparent accord with the Shapiro conjecture. Interestingly, in both cases the total mass of the merged remnant exceeds the maximum mass of a cold star. Does the simulation with $`1.4M_{\odot }`$ stars show unambiguously that a delayed collapse does not occur for stars of this mass?

Insight into what could be happening may be gleaned from the original discussion in Ref. Section VI of that paper, “Discussion and Caveats”, already anticipated numerical subtleties that would be encountered in simulating a collision of stars that are initially at rest at infinity. To obtain reliable results, the requirements for computational accuracy become very stringent for high-mass stars near the maximum mass. The reason is clear from the analytic argument for the existence of a stable, equilibrium solution for the remnant. For such a solution to exist, it is necessary that the binding energy of the final remnant reside along the stable branch of the TOV equilibrium curve. The ratio $`M/M_0`$ measures the specific binding energy of a star according to $`E_{\mathrm{bind}}/M_0=1-M/M_0`$. This ratio monotonically decreases with increasing central density $`\rho _c`$ along the stable branch of the TOV curve, whereas the mass and rest-mass monotonically increase along this branch. The turning point on the curve marks the onset of radial instability; there $`M`$ and $`M_0`$ assume their maximum values and the ratio $`M/M_0`$ assumes its minimum value, $`(M/M_0)_{\mathrm{min}}`$. In the idealized scenario in which the stars begin from rest at infinity, strict conservation of mass and rest-mass implies that for the hot remnant

$$M_{0,\mathrm{hot}}=2M_0\quad \text{and}\quad M_{\mathrm{hot}}=2M.$$ (1)

As a result, the binding energy of the hot remnant is identical to the binding energy of each of the progenitor stars, and this guarantees the existence of an equilibrium solution (cf. Eqs. 8–10 of Ref.). Now consider an alternative scenario in which Eq.
(1) is replaced by

$$M_{0,\mathrm{hot}}=2M_0\quad \text{and}\quad M_{\mathrm{hot}}=2M(1-f).$$ (2)

As discussed in Ref., the fractional change in the total mass-energy $`f`$ might be the result of a true physical departure from the conditions or assumptions of the idealized collision scenario. The quantity $`f`$ may account, for example, for the energy loss due to gravitational radiation or for the release of the progenitor stars from rest at finite separation rather than from infinity. Alternatively, $`f`$ might represent the degree of numerical error arising in a numerical simulation. Such error might be present due to imprecise initial data, including, for example, initial data which do not exactly correspond to the true solution at finite separation for two stars which actually begin from rest at infinity. Numerical error might also result from finite-difference integration error in the evolution due to finite spatial grid and time-step resolution, imprecise outer boundary conditions or outer boundary points that don’t extend sufficiently far into the radiation zone, global instabilities, etc.

To illustrate, suppose we assume that, for whatever reason, numerical error results in a spurious fractional decrease in $`M`$ in the course of the simulation. We then estimate the maximum tolerable error $`f_{\mathrm{tol}}\ge 0`$ which still admits a stable, equilibrium solution for the remnant. According to Eq. (2), relaxation to a stable equilibrium state requires

$$\frac{M_{\mathrm{hot}}}{M_{0,\mathrm{hot}}}=(1-f)\frac{M}{M_0}\ge \left(\frac{M}{M_0}\right)_{\mathrm{min}},\quad f\le f_{\mathrm{tol}}\equiv 1-\frac{\left(M/M_0\right)_{\mathrm{min}}}{\left(M/M_0\right)},$$ (3)

Eqn. (3) demonstrates the difficulty of simulating accurately a head-on collision from rest at infinity as the progenitor masses approach the maximum mass. As $`M\rightarrow M_{\mathrm{max}}`$, we have $`(M/M_0)\rightarrow (M/M_0)_{\mathrm{min}}`$ and, hence, $`f_{\mathrm{tol}}\rightarrow 0`$. To appreciate the severity of the problem, consider the actual simulations performed for $`\mathrm{\Gamma }=2`$, for which $`(M/M_0)_{\mathrm{min}}=0.910`$ (see Fig. 3 in Ref.). This minimum occurs at $`1.46M_{\odot }`$, the maximum mass. We then find $`(M/M_0)=0.915`$ for the $`M=1.4M_{\odot }`$ star and $`(M/M_0)=0.959`$ for the $`M=0.8M_{\odot }`$ star. According to Eq. (3) these values imply

$$f_{\mathrm{tol}}(0.8M_{\odot })=0.05\quad \mathrm{and}\quad f_{\mathrm{tol}}(1.4M_{\odot })=0.005.$$ (4)

Eqn. (4) suggests that determining whether a stable equilibrium state forms following the merger of two $`1.4M_{\odot }`$ stars requires that the simulations be accurate to better than 0.5%. Considerably less accuracy is required as one moves away from the maximum mass; it is only 5% for the $`0.8M_{\odot }`$ stars. Given the multiple potential sources of numerical error, from the initial data to the numerical integrations, it is by no means evident that existing $`3+1`$ numerical relativity codes can set up and track such a strong-field collision to 0.5% accuracy.

The conjecture put forth in Ref., that when two neutron stars collide head-on from rest at infinity sufficient thermal pressure may be generated to support the hot remnant in quasi-static equilibrium against collapse, may be generally correct, despite the report in Ref. of a numerical example to the contrary. The simulation cited involves the collision of high-mass stars near the maximum mass, which requires very high numerical accuracy to simulate reliably.
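The tolerances of Eq. (4) follow from Eq. (3) and the quoted values of $`M/M_0`$; the short sketch below just repeats that arithmetic.

```python
# Maximum tolerable spurious loss of total mass-energy, Eq. (3):
# f_tol = 1 - (M/M0)_min / (M/M0), with (M/M0)_min at the maximum mass.
M_over_M0_min = 0.910   # Gamma = 2 polytrope, at the maximum mass

for label, M_over_M0 in (("0.8 Msun", 0.959), ("1.4 Msun", 0.915)):
    f_tol = 1.0 - M_over_M0_min / M_over_M0
    print(f"{label}: f_tol = {f_tol:.3f}")
# -> 0.051 and 0.005: the 1.4 Msun case demands better than ~0.5% accuracy.
```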
In the unreported case involving lower-mass stars whose combined mass still exceeds the maximum mass, the conjecture appears to be corroborated. The accuracy requirements for the lower-mass case are far less stringent, and presumably more easily attainable. Evaluating the conjecture at high mass may have to await more sophisticated code development and/or larger machines. Even with such advances, evaluating the conjecture in this domain may still require a careful limiting procedure based on a sequence of runs with increasing initial separation between the stars, to be confident that the initial data correctly correspond to configurations which fall in from rest at infinity.

The $`0.8M_{\odot }`$ simulation involves stars that have masses far below the maximum mass. As a result, this simulation is likely to be more reliable. If the delayed-collapse outcome found for this collision is indeed correct, then we can already speculate that for neutron stars governed by realistic equations of state, which currently give maximum masses in the range $`1.8M_{\odot }`$–$`2.3M_{\odot }`$, the collision of two realistic (low-mass) $`1.4M_{\odot }`$ stars from rest at infinity will likely lead to delayed collapse, in accord with the conjecture.

###### Acknowledgements. We thank T. Baumgarte, F. Lamb, M. Shibata and W.M. Suen for useful discussions. This work has been supported in part by NSF Grants AST 96-18524 and PHY 99-02833 and NASA Grants NAG5-7152 and NAG5-8418 to the University of Illinois at Urbana-Champaign.
# A systematic study of two particle correlations from NA49 at CERN SPS

## Bose-Einstein Interferometry

The aim of studying two-particle correlations is the reconstruction of the freeze-out conditions of final-state particles. In relativistic heavy-ion collisions it appears that this state has to be described by a set of rather complicated non-isotropic, non-static and particle-type dependent emission functions. To constrain these one has to gather selective information over a wide region of phase space by applying different analysis methods and by a systematic change of the initial conditions of the collision. The NA49 experiment, with its four large-volume TPCs detecting over 60% of all pions emitted, has collected a rather comprehensive body of data. It comprises centrality-selected Pb$`+`$Pb collisions at 158 and at 40 AGeV, minimum bias Pb$`+`$Pb events at 158 AGeV, as well as 158 AGeV p$`+`$p and p$`+`$Pb interactions. Overall this represents an ideal basis for such a study.

Usually freeze-out conditions are investigated by means of intensity interferometry, which exploits correlations at small momentum differences $`Q`$ of two particles arising from quantum interference in the case of the two particles being indistinguishable. The description of the correlation function $`C_2(Q)`$ by a gaussian parametrization in the three components of the momentum difference $`Q_{\mathrm{side}}`$, $`Q_{\mathrm{out}}`$, $`Q_{\mathrm{long}}`$ relates the gaussian width parameters to the transverse ($`R_{\mathrm{side}}`$), longitudinal ($`R_{\mathrm{long}}`$) and temporal ($`R_{\mathrm{diff}}^2=R_{\mathrm{out}}^2-R_{\mathrm{side}}^2`$) extent of the source at freeze-out.

In order to observe the mere quantum-statistical interference of like-sign charged pions, the correlation function has to be corrected for correlations owing to final-state interactions, mainly due to the Coulomb repulsion. For our central Pb$`+`$Pb 158 AGeV data this correction is done – as described in – by the measured correlation function of unlike-sign charged pairs. In the case of the minimum bias data sample, as well as in the case of the central 40 AGeV data analysis, the analytical ansatz for the Coulomb correlation, as suggested in , is chosen instead. It was incorporated in the fitting procedure of the uncorrected correlation function, to take the size dependence of the Coulomb effect properly into account. Despite the two different methods, the results obtained from the central 158 AGeV data sample coincide with those from the most central of five bins in the minimum bias 158 AGeV data. Due to the expected small source size, the usual Gamow factor is applied in the case of p$`+`$p and p$`+`$Pb collisions to correct for Coulomb correlations; we note that compared to the uncorrected case no significant change of the radii is observed.

To eliminate the inefficiency of the NA49 TPCs for detecting pairs of two close tracks, pairs with distances of less than 2 cm in the Pb$`+`$Pb analysis, or with relative laboratory momenta $`Q_{\mathrm{x},\mathrm{y}}<20`$ MeV/c and $`Q_\mathrm{z}<100`$ MeV/c (z along the beam axis) in the p$`+`$p and p$`+`$Pb analyses, have been excluded from the analysis. This restriction is imposed on pairs of particles from the same event (corr) as well as on those combined from different events (mix). The latter are used as reference sample (denominator) in the correlation function $`C_2=N_{\mathrm{corr}}^{\mathrm{pair}}/N_{\mathrm{mix}}^{\mathrm{pair}}`$.
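As an illustration of the event-mixing construction, the sketch below builds $`C_2`$ as a histogram ratio; the pair samples are synthetic stand-ins (the distributions and cut values are assumptions, not NA49 data), and in the real analysis identical two-track cuts are applied to both the corr and mix samples.

```python
import numpy as np

# Sketch of C2 = N_corr / N_mix from same-event and mixed-event pairs.
rng = np.random.default_rng(0)
q_corr = rng.exponential(0.08, 100_000)   # Q [GeV/c], same-event pairs (toy)
q_mix = rng.exponential(0.10, 100_000)    # Q [GeV/c], mixed-event pairs (toy)

bins = np.linspace(0.0, 0.5, 51)
n_corr, _ = np.histogram(q_corr, bins)
n_mix, _ = np.histogram(q_mix, bins)

# Normalize so that C2 -> 1 where the two samples have the same shape.
c2 = (n_corr / np.maximum(n_mix, 1)) * (n_mix.sum() / n_corr.sum())
```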
All results presented here refer to the Longitudinal Co-Moving System (LCMS) as the Lorentz frame, which is determined for each pair in such a way that the sum of the longitudinal momentum components of the pair vanishes.

Results of the analysis of the p$`+`$p collisions are shown in the two left columns of figure 1, together with results from the analysis of central (CMD$`\ge `$7) and peripheral (CMD$`<`$7) p$`+`$Pb collisions (for a definition of CMD see the contribution to this conference). The phase space is subdivided into two regions of the average rapidity $`y_{\pi \pi }`$ of the two pions at $`y_{\pi \pi }=3.9`$ and into two regions of the average pion transverse momentum $`k_\mathrm{t}`$ at $`k_\mathrm{t}=0.25`$ GeV$`/`$c. In each column of figure 1, $`<y_{\pi \pi }>`$ is the average over all pairs contributing to the respective region. Similarly, $`<k_\mathrm{t}>`$ are used as abscissae. To increase the statistical significance, the distributions from pairs of negative hadrons are added to those of positive hadrons, since the separate analyses agree within error bars. It should be emphasized that for these collision systems this is the first time such a phase-space dependent three-dimensional analysis has become feasible.

The radii from p$`+`$Pb tend to be slightly larger than those from p$`+`$p. In all cases an ordering is observed, $`R_{\mathrm{long}}>R_{\mathrm{out}}>R_{\mathrm{side}}`$, the latter being consistent with the observation of a finite duration of freeze-out. Moreover, the p$`+`$p radii at large values of k<sub>t</sub> appear to be smaller than those at low k<sub>t</sub>. Whereas in central Pb$`+`$Pb reactions (two right columns of figure 1) such a behaviour of the transverse radii is attributed to a hydrodynamical expansion of the system, in p$`+`$p it seems more likely to be a characteristic of the decay pattern of resonances such as the $`\rho `$, which play an important role at this shorter length scale.

Complementary to the change of the collision system is the variation of the centrality of the collision. In the right column of figure 2 the dependence on the number $`n_{\mathrm{part}}`$ of nucleons participating in the collision is shown for Pb$`+`$Pb collisions at different centralities, accompanied by our p$`+`$p result and by the NA35 result for the system S$`+`$S at 200 AGeV. With increasing $`n_{\mathrm{part}}`$ the transverse radii $`R_{\mathrm{out}}`$ and $`R_{\mathrm{side}}`$ grow continuously, whereas $`R_{\mathrm{long}}`$ appears constant. $`R_{\mathrm{diff}}^2`$ is positive over the full range and $`R_{\mathrm{diff}}`$ grows linearly with $`n_{\mathrm{part}}`$. Similar to the findings of NA35, RQMD gives a good agreement for $`R_{\mathrm{side}}`$ but overestimates $`R_{\mathrm{out}}`$ and $`R_{\mathrm{long}}`$. A comparison of $`R_{\mathrm{side}}`$ with an estimate of the geometrical transverse size of the overlap region in the collisions (line in figure 2; the line corresponds to the average distance of points in the overlap region of two spheres of radius $`1.16A^{1/3}`$, offset by the impact parameter from the center of the collision, projected into a randomly oriented reaction plane) demonstrates that the interferometric radii reflect the freeze-out stage, which is preceded by a strong expansion of the dense collision zone produced shortly after the collision.
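The radii discussed above come from fits of the standard three-dimensional Gaussian form; a sketch of that parametrization is given below. The coherence parameter $`\lambda `$ and the exact normalization are assumptions here, since the text does not spell out the full fit function.

```python
import numpy as np

HBARC = 0.197  # GeV fm

def c2_model(q, lam, r_side, r_out, r_long):
    """Gaussian BE parametrization in the LCMS: radii in fm, Q in GeV/c.

    q = (q_side, q_out, q_long); a fit (e.g. scipy.optimize.curve_fit)
    of this form to the Coulomb-corrected C2 yields the width parameters.
    """
    q_side, q_out, q_long = q
    arg = ((r_side * q_side) ** 2 + (r_out * q_out) ** 2 +
           (r_long * q_long) ** 2) / HBARC ** 2
    return 1.0 + lam * np.exp(-arg)

# Example: for a 6 fm source the correlation drops by 1/e at Q ~ 33 MeV/c.
print(c2_model((0.033, 0.0, 0.0), 1.0, 6.0, 6.0, 6.0))   # ~1.36
```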
The left column of figure 2 shows a first step towards closing the gap between the highest AGS energy at 11.6 AGeV and the SPS measurements at 158 AGeV by the (very preliminary) results from the 40 AGeV commissioning run of NA49 in 1998. Although the 40 AGeV and the 158 AGeV results are taken at $`\frac{<y_{\pi \pi }>-y_{\mathrm{cms}}}{y_{\mathrm{cms}}}\approx 1.0`$ whereas the AGS point is measured at mid-rapidity, the comparison is justified by the observed slow variation of the radii with $`y_{\pi \pi }`$ in NA49. In the transition from AGS to the highest SPS energy $`R_{\mathrm{long}}`$ more than doubles, whereas only a small increase in the transverse radii $`R_{\mathrm{out}}`$ and $`R_{\mathrm{side}}`$ is observed and the temporal component $`R_{\mathrm{diff}}`$ even seems to be constant. ## Correlations of Non Identical Particles A method for investigating space-time emission asymmetries has been suggested in , which utilizes the effects of final state interaction on the correlations of two non identical particles at small relative velocities. Contrary to the BE correlation method it gives access to the space-time asymmetries in the emission function of different particle species. Such an analysis was carried out for the first time in the SPS energy regime and is presented here for the case of proton-pion correlations. The method involves the correlation functions $`C_+(Q_{\mathrm{inv}})`$ and $`C_-(Q_{\mathrm{inv}})`$ derived for the two cases where $`\mathrm{cos}\mathrm{\Psi }>0`$ and $`\mathrm{cos}\mathrm{\Psi }<0`$, respectively, with $`\mathrm{\Psi }`$ being defined as the angle between the pair velocity sum-vector and the velocity difference-vector in the rest frame of the pair, and $`Q_{\mathrm{inv}}`$ redefined as the momentum difference in the pair rest frame<sup>3</sup><sup>3</sup>3In this frame $`Q_{\mathrm{inv}}`$ is equal to the relative velocity times the reduced mass of the pair. For pairs of equal mass particles the definition coincides with the usual definition of $`Q_{\mathrm{inv}}`$.. A deviation of the ratio $`\frac{C_+}{C_-}(Q_{\mathrm{inv}})`$ from unity then allows – besides a more quantitative evaluation – for a distinction between emission patterns in which, on the one hand, (I) low $`p_t`$ pions are emitted later or/and closer to the reaction axis or, on the other hand, (II) low $`p_t`$ pions are emitted earlier or/and further from the reaction axis than high $`p_t`$ protons. Results for a small subset of NA49 data of about 40,000 central Pb$`+`$Pb events are shown in figure 3. The data clearly exhibit asymmetries in the space-time emission favouring scenario I of the above two. Moreover, a good agreement is achieved in comparison to RQMD 2.3, in which the spatial component plays the dominant role in the occurrence of the asymmetry observed for the model.
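The pair classification underlying $`C_+`$ and $`C_-`$ can be sketched as follows (Python; the input momentum arrays are hypothetical, and a real analysis would divide each histogram by a mixed-event reference before forming the ratio):

```python
import numpy as np

def boost(p, E, beta):
    """Boost momenta p (N,3) with energies E into the frame moving with
    velocity beta (N,3), in units where c = 1."""
    b2 = (beta ** 2).sum(1, keepdims=True)
    g = 1.0 / np.sqrt(1.0 - b2)
    bp = (beta * p).sum(1, keepdims=True)
    p_star = p + ((g - 1.0) * bp / np.maximum(b2, 1e-30) - g * E[:, None]) * beta
    return p_star, g[:, 0] * (E - bp[:, 0])

def cplus_over_cminus(p_pi, p_pr, m_pi=0.1396, m_pr=0.9383):
    """Classify proton-pion pairs by the sign of cos(Psi), the angle between
    the velocity sum and the velocity difference in the pair rest frame,
    then histogram k*, the momentum in that frame."""
    E_pi = np.sqrt(m_pi**2 + (p_pi**2).sum(1))
    E_pr = np.sqrt(m_pr**2 + (p_pr**2).sum(1))
    beta = (p_pi + p_pr) / (E_pi + E_pr)[:, None]
    q_pi, e_pi = boost(p_pi, E_pi, beta)
    q_pr, e_pr = boost(p_pr, E_pr, beta)
    v_pi, v_pr = q_pi / e_pi[:, None], q_pr / e_pr[:, None]
    vsum, vdif = v_pi + v_pr, v_pi - v_pr
    cos_psi = (vsum * vdif).sum(1) / (np.linalg.norm(vsum, axis=1) *
                                      np.linalg.norm(vdif, axis=1))
    k_star = np.linalg.norm(q_pi, axis=1)   # = |q_pr| in the pair frame
    bins = np.linspace(0.0, 0.2, 41)
    c_plus, _ = np.histogram(k_star[cos_psi > 0], bins=bins)
    c_minus, _ = np.histogram(k_star[cos_psi < 0], bins=bins)
    return c_plus / np.maximum(c_minus, 1)
```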
# About coherent structures in random shell models for passive scalar advection \[ ## Abstract A study of anomalous scaling in models of passive scalar advection in terms of singular coherent structures is proposed. The stochastic dynamical system considered is a shell model reformulation of Kraichnan model. We extend the method introduced in to the calculation of self-similar instantons and we show how such objects, being the most singular events, are appropriate to capture asymptotic scaling properties of the scalar field. Preliminary results concerning the statistical weight of fluctuations around these optimal configurations are also presented. \] Experimental and numerical investigations of passive scalars advected by turbulent flows have shown that passive scalar structure functions $`T_{2n}(r)`$ have an anomalous power law behaviour: $`T_{2n}(r)=\langle (\theta (x+r)-\theta (x))^{2n}\rangle =\langle (\delta _r\theta (x))^{2n}\rangle \sim r^{\zeta (2n)}`$, where by anomalous scaling we mean that the exponents $`\zeta (2n)`$ do not follow the dimensional estimate $`\zeta (2n)=n\zeta (2)`$. A great theoretical challenge is to develop a theory which allows a systematic calculation of $`\zeta (n)`$ from the Navier-Stokes equations. Recently , it has been realized that intermittent power laws are also present in a model of passive scalar advection by stochastic velocity fields, for $`n>1`$ . The model, introduced by Kraichnan, is defined by the standard advection equation: $$\partial _t\theta +𝐮\cdot \mathbf{\nabla }\theta =\kappa \mathrm{\Delta }\theta +\varphi ,$$ (1) where $`𝐮`$ is a Gaussian, isotropic, white-in-time stochastic $`d`$-dimensional field with a scaling second order structure function: $`\langle (u_i(x)-u_i(x+r))(u_j(x)-u_j(x+r))\rangle =D_0r^\xi ((d+\xi -1)\delta _{ij}-\xi r_ir_j/r^2)`$. The physical range for the scaling parameter of the velocity field is $`0\le \xi \le 2`$, $`\varphi `$ is an external forcing and $`\kappa `$ is the molecular diffusivity. A huge amount of work has been done in recent years on the Kraichnan model. Due to the white-in-time character of the advecting velocity field, the equations for passive correlators of any order $`n`$ are linear and closed. This allows explicit, perturbative calculations of anomalous exponents in terms of zero-mode solutions of the closed equation satisfied by the $`n`$-point correlation function, by means of expansions in $`\xi `$ or in $`1/d`$ , with $`d`$ the physical space dimensionality. The connection between anomalous scaling and zero modes, however fascinating, looks very difficult to carry over to the most important problem, that of the Navier-Stokes equations. In that case, the problem being non linear, the hierarchy of equations of motion for velocity correlators is not closed and the zero-mode approach would have to be pursued in a much less tractable functional space. From a phenomenological point of view, a simple way to understand the presence of anomalous scaling is to think of the scalar field as made of singular scaling fluctuations $`\delta _r\theta (x)\sim r^{h(x)}`$, with a probability to develop an $`h`$-fluctuation at scale $`r`$ given by $`P_r(h)\sim r^{f(h)}`$, $`f(h)`$ being the co-dimension of the fractal set where $`h(x)=h`$. This is the multifractal road to anomalous exponents that leads to the usual saddle-point estimate for the scaling exponents of structure functions: $`\zeta (2n)=\mathrm{min}_h(2nh+f(h))`$ . In this framework, high order structure functions are dominated by the most intense events, i.e. fluctuations characterized by the exponent $`h_{min}`$: $`\zeta (n)\simeq nh_{min}`$ for $`n\to \infty `$.
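The saddle-point estimate is easy to explore numerically; a minimal sketch with a toy $`f(h)`$ (the parabola below is purely illustrative, not a fit to any data):

```python
import numpy as np

def zeta_from_fh(h, f, orders):
    """Saddle-point estimate zeta(n) = min_h [n h + f(h)] on a grid."""
    return np.array([np.min(n * h + f) for n in orders])

h = np.linspace(0.05, 1.2, 500)       # candidate scaling exponents
f = 8.0 * (h - 0.7) ** 2              # toy codimension curve f(h)
orders = np.arange(1, 13)
zeta = zeta_from_fh(h, f, orders)
# at large n the minimum sticks to the smallest h in the support, so the
# increments zeta(n+1) - zeta(n) tend to h_min: the slope saturates
print(zeta, np.diff(zeta))
```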
The emergence of singular fluctuations, at the basis of the multifractal interpretation, naturally suggests that instantonic calculus can be used to study such special configurations in the system. Recently, instantons have been successfully applied in the Kraichnan model to estimate the behaviour of high-order structure functions when $`d(2-\xi )\gg 1`$ , and to estimate PDF tails for $`\xi =2`$ . In this letter, we propose an application of the instantonic approach to random shell models for passive scalar advection, where an explicit calculation of the singular coherent structures can be performed. Let us briefly summarize our strategy and our main findings. First, we restrict our hunt for instantons to coupled, self-similar, configurations of noise and passive scalar, a plausible assumption in view of the multifractal picture described above. We develop a method for computing in a numerical but exact way such configurations of optimal Gaussian weight for any scaling exponent $`h`$. We find that $`h`$ cannot go below some finite threshold $`h_{min}(\xi )`$. We compare $`h_{min}(\xi )`$ at varying $`\xi `$ given by the instantonic calculus with the values extracted from numerical simulations, showing that the agreement is perfect and therefore supporting the idea that self-similar structures govern high-order intermittency. Second, assuming that these localized pulse-like instantons constitute the elementary bricks of intermittency also for finite-order moments, we compute their dressing by quadratic fluctuations. We obtain in this way the first two terms of the function $`f(h)`$ via a “semi-classical” expansion. Let us notice that a rigorous application of the semi-classical analysis would demand a small parameter controlling the rate of convergence of the expansion, like $`1/n`$ where $`n`$ is the order of the moment, or $`1/d`$, where $`d`$ is the physical space dimension. As we do not dispose of such a small parameter in our problem, the reliability of our results concerning the statistical weight of the $`h`$-pulses can only be checked by an a posteriori comparison with numerical data existing in the literature. At the end of this communication, we will present some preliminary results on this important issue, while much more extensive work will be reported elsewhere. Shell models are simplified dynamical models which have demonstrated in the past the ability to reproduce many of the most important features of both velocity and passive turbulent cascades . The model we are going to use is defined as follows. First, a shell-discretization of the Fourier space in a set of wavenumbers defined on a geometric progression $`k_m=k_0\lambda ^m`$ is introduced. Then, passive increments at scale $`r_m=k_m^{-1}`$ are described by a real variable $`\theta _m(t)`$. The time evolution is obtained according to the following criteria: (i) the linear term is purely diffusive and is given by $`-\kappa k_m^2\theta _m`$; (ii) the advection term is a combination of the form $`k_m\theta _{m^{\prime }}u_{m^{\prime \prime }}`$, where $`u_m`$ are random Gaussian and white-in-time shell-velocity fields; (iii) interacting shells are restricted to nearest-neighbors of $`m`$; (iv) in the absence of forcing and damping, the model conserves the volume in phase-space and the energy $`E=\sum _m|\theta _m|^2`$. Properties (i), (ii) and (iv) are valid also for the original equation (1) in Fourier space, while property (iii) is an assumption of locality of interactions among modes, which is rather well founded as long as $`0\le \xi \le 2`$.
The simplest model exhibiting inertial-range intermittency is defined by : $`[{\displaystyle \frac{d}{dt}}+\kappa k_m^2]\theta _m(t)=c_m\theta _{m+1}(t)u_m(t)+`$ (2) $`+a_m\theta _{m-1}(t)u_{m-1}(t)+\delta _{1m}\varphi (t),`$ (3) with $`a_m=-c_{m-1}=k_m`$, and where the forcing term acts only on the first shell. Following Kraichnan, we also assume that the forcing term $`\varphi (t)`$ and the velocity variables $`u_m(t)`$ are independent Gaussian and white-in-time random variables, with the following scaling prescription for the advecting field: $$\langle u_m(t)u_n(t^{\prime })\rangle =\delta (t-t^{\prime })k_m^{-\xi }\delta _{mn}.$$ (4) Shell models have been proved analytically and non-perturbatively to possess anomalous zero modes similarly to the original Kraichnan model (1). The role played by fluctuations with local exponent $`h(x)`$ in the original physical space model is here replaced by the formation at large scale of structures propagating self-similarly towards smaller scales. The existence, in the inviscid unforced problem, of such solutions, associated with the appearance of finite time singularities, is well established. The analytical resolution of the instantonic problem even in the simplified case of shell models is a hard task. In , a numerical method to select self-similar instantons in the case of a shell model for turbulence has been introduced. In the following, we are going to apply a similar method to our case. We rewrite model (3) in a more concise form: $$\frac{d𝜽}{dt}=\mathrm{M}[𝐛]𝜽.$$ (5) The scalar and velocity gradient vectors, $`𝜽`$ and $`𝐛`$, are made from the variables $`\theta _m`$ and $`k_mu_m`$. As far as inertial scaling is concerned, we expect that some strong universality properties apply with respect to the large scale forcing. Indeed, forcing changes only the probability with which a pulse appears at large scale, but not its inertial range scaling behaviour, $`P_{k_m}(h)\sim k_m^{-f(h)}`$. So, as we are interested only in the evaluation of $`f(h)`$, we drop the forcing and dissipation in (5). The matrix $`\mathrm{M}[𝐛]`$ is linear in $`𝐛`$ and can be obviously deduced from (3). The stochastic multiplicative equation (5) must be interpreted à la Stratonovich. Nevertheless, once the Ito-prescription for time discretization is adopted, the dynamics gets Markovian and a path integral formulation can then be easily implemented. This changes (5) into: $$\frac{d𝜽}{dt}=-B\mathrm{D}𝜽+\mathrm{M}[𝐛]𝜽,$$ (6) where $`\mathrm{D}`$ is a diagonal matrix (Ito-drift) $`\mathrm{D}_{mm}=k_m^{2-\xi }`$, and $`B`$ is a positive constant. As we said before, we are looking for coherent structures developing a scaling law $`\theta _m\sim k_m^{-h}`$ as they propagate towards small scales in the presence of a velocity realization of optimal Gaussian weight. The probability to go from one point to another in configuration space (spanned by $`𝜽`$) between times $`t_i`$ and $`t_f`$ can be written quite generally as a path integral over the three fields $`𝐛`$, $`𝜽`$, $`𝒑`$ of the exponential $`e^{-S[𝐛,𝜽,𝒑]}=e^{-\int _{t_i}^{t_f}\mathcal{L}[𝐛,𝜽,𝒑]dt}`$, where the Lagrangian $`\mathcal{L}`$ is given by the equation: $$\mathcal{L}(𝐛,𝜽,𝒑)=\frac{1}{2}𝐛\cdot \mathrm{D}^{-1}𝐛+𝒑\cdot (\frac{d𝜽}{dt}+B\mathrm{D}𝜽-\mathrm{M}[𝐛]𝜽),$$ (7) and $`𝒑`$ is an auxiliary field conjugate to $`𝜽`$ which enforces the equation of motion (6).
The minimization of the effective action $`S`$ leads to the following coupled equations: $`{\displaystyle \frac{d𝜽}{dt}}`$ $`=`$ $`-B\mathrm{D}𝜽+\mathrm{M}[𝐛]𝜽,`$ (8) $`{\displaystyle \frac{d𝒑}{dt}}`$ $`=`$ $`B\mathrm{D}𝒑-{}^{t}\mathrm{M}[𝐛]𝒑,`$ (9) with the self-consistency condition for $`𝐛`$: $$𝐛=\mathrm{D}\,{}^{t}\mathrm{N}[𝜽]𝒑,$$ (10) where the matrix $`\mathrm{N}[𝜽]`$ is defined implicitly through the relation $`\mathrm{N}[𝜽]𝐛=\mathrm{M}[𝐛]𝜽`$. We are now able to predict the scaling dependence of the variables $`b_m`$. For a truly self-similar propagation, the cost in action per step along the cascade must be constant. The characteristic turn-over time required by a pulse localized on the $`m`$th shell to move to the next one can be dimensionally estimated as $`1/(u_mk_m)\sim b_m^{-1}`$. Recalling the scaling dependence of $`\mathrm{D}`$ and the definition of the action (7), we expect: $`\mathrm{\Delta }S=\int _{t_m}^{t_{m+1}}dt\,\frac{1}{2}𝐛\cdot \mathrm{D}^{-1}𝐛\sim k_m^{-(2-\xi )}b_m`$. We can thus deduce that $`b_m\sim k_m^{2-\xi }`$. Let us now discuss how to explicitly find solutions of the above system of equations. Clearly, there is no hope of finding analytically the exact solutions of these deterministic non linear coupled equations. Also numerically, the problem is quite delicate, because (8) and (9) are dual of each other and have opposite dynamical stability properties. This phenomenon can hardly be captured by a direct time integration. To overcome this obstacle, a general alternative scheme adopting an iterative procedure has been proposed in . For a given configuration of the noise, each step consists in integrating the dynamics of the passive scalar (8) forward in time to let the solution of optimal growth emerge. Conversely, the dual dynamics of the auxiliary field (9) is integrated backward in time, along the direction of minimal growth, in agreement with the prediction deduced from (10): $`𝒑\sim 𝜽^{-1}`$. Then the noise $`𝐛`$ can be recomputed from the self-consistency equation (10) and the process is repeated until convergence is reached. Self-similar passive solutions must be triggered by self-similar noise configurations: $$b_m(t)=\frac{1}{(t^{}-t)}F(k_m^{2-\xi }(t^{}-t)),$$ (11) where $`t^{}`$ is the critical time at which a self-similar solution reaches infinitesimally small scales in the absence of dissipation. To overcome the non-homogeneity of the time evolution seen by these accelerating pulses, we introduce a new time variable $`\tau =-\mathrm{log}(t^{}-t)`$. Then, the advecting self similar velocity field (11) can be rewritten in the form: $`𝐛(\tau )=e^\tau 𝐂(\tau )`$ where $`C_m(\tau )`$ is still the velocity gradient field, but expressed in a different time scale, such that: $`C_m(\tau )=F(m(2-\xi )\mathrm{log}\lambda -\tau )`$. The sought self-similar solutions appear in this representation as traveling waves, whose period $`T=(2-\xi )\mathrm{log}\lambda `$ is fixed by the scaling considerations reported above. In this way, we can limit the search for solutions to the time interval \[$`0T`$\], and the action at the final time $`t_f=mT`$ is deduced from $`S(t_f)=mS(T)`$. Then comes the main point of our algorithm. For a fixed noise configuration $`𝐂`$, the field $`𝜽`$ must be the eigenvector associated to the maximal (in absolute value) Lyapunov exponent $`\sigma _{max}`$ of the Floquet evolution operator: $$U(T;0)=𝒯_1\mathrm{exp}\int _0^T(-B\mathrm{D}e^{-\tau }+\mathrm{M}[𝐂(\tau )])d\tau .$$ (12) Here $`𝒯_1`$ denotes the translation operator by one unit to the left along the lattice.
Similarly, the auxiliary field must be the eigenvector associated with the Lyapunov exponent $`-\sigma _{max}`$ of the inverse dual operator $`{}^{t}U^{-1}`$. Starting from an initial arbitrary traveling wave shape for $`𝐂(\tau )`$ with period $`T`$, we have computed the passive scalar and its conjugate fields at any time between $`0`$ and $`T`$ by diagonalization of the operator $`U`$, recomputed the velocity gradient field $`𝐂(\tau )`$ from the self-consistency equation (10), and iterated this procedure until an asymptotically stable state, $`𝜽^\mathrm{𝟎}`$, $`𝒑^\mathrm{𝟎}`$, $`𝐂^\mathrm{𝟎}`$, was reached. The scaling exponent $`h`$ of $`\theta _m\sim k_m^{-h}`$ for the passive scalar can be deduced from $`\theta _m^0(h)\sim e^{m\sigma _{max}T}`$, so that $`h=(\xi -2)\sigma _{max}`$. Note that $`h`$ is bound to be positive due to the conservation of energy. In our algorithm, the norm of the gradient velocity field $`𝐂(0)`$ acts as the unique control parameter, in one to one correspondence with $`h`$. The action $`S^0(h)`$ is, in multifractal language, nothing but the first estimate of the $`f(h)`$ curve based only on the contribution of all pulse-like solutions, more precisely $`f(h)=S^0(h)/\mathrm{ln}\lambda `$. We now turn to the presentation and discussion of our main result. By varying the control parameter, we obtain a continuum of exponents in the range $`h_{min}(\xi )\le h\le h_{max}(\xi )`$. The simple analysis of the $`h`$-spectrum allows predictions only for observables which do not depend on the $`f(h)`$ curve, i.e. only on the scaling of $`T_{2n}`$ for $`n\to \infty `$ ($`\zeta (n)\simeq h_{min}n`$ for $`n`$ large enough). Unfortunately, high order exponents are the most difficult quantities to extract from numerical or experimental data. Nevertheless, thanks to the extreme simplicity of shell models, very accurate numerical simulations have been done at different values of $`\xi `$ and in some cases a safe upper bound prediction on the asymptotics of the $`\zeta (n)`$ exponents could be extracted. To compare our results with the numerical data existing in the literature, we have analyzed the shell-model version of passive advection proposed in . In Fig.1, we show the $`h_{min}`$ curve obtained at various $`\xi `$ from the instantonic calculation, together with the $`h_{min}^{num}`$ values extracted from direct numerical simulations of the quoted model performed at two different values of $`\xi `$: the agreement is good. Our calculation predicts, within numerical errors, the existence of a critical $`\xi _c\approx 1.75`$ above which the minimal exponent reaches the lowest bound $`h_{min}=0`$. This goes under the name of saturation and it is the signature of the presence of discontinuous-like solutions in physical space, $`\delta _r\theta \sim r^0`$. Theoretical and numerical results suggest the existence of such an effect in the Kraichnan model for any value of $`\xi `$. The existence of saturation in the latter is due to typical real-space effects and therefore it is not surprising that there is not a complete quantitative analogy with the shell-model case. Let us now present the other – preliminary – result, i.e. the role played by instantons for finite-order structure functions. If we just keep the zeroth order approximation for $`f(h)=S_0(h)/\mathrm{log}\lambda `$, we get the $`\zeta _n`$ curve shown in Fig.2, which is quite far from the numerical results of (the asymptotic linear behavior is in fact not even reached in the range of $`n`$ represented on the figure).
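In practice the iteration just described can be summarized compactly; the sketch below is structural only (the coupling-matrix builders `make_M`, `make_N` and the shift operator are schematic stand-ins, and the within-period time dependence of the fields is collapsed for brevity — the actual computation evolves them across $`\tau \in [0,T]`$ and updates $`𝐂(\tau )`$ pointwise):

```python
import numpy as np

def find_instanton(xi, k, make_M, make_N, n_tau=64, n_iter=100):
    """Structural sketch of the iterative instanton search in the text.
    k: array of shell wavenumbers k_m; make_M(C), make_N(theta): schematic
    builders of the coupling matrices appearing in eqs. (5) and (10)."""
    n = len(k)
    D = k ** (2.0 - xi)                     # Ito-drift diagonal D_mm
    T = (2.0 - xi) * np.log(k[1] / k[0])    # traveling-wave period
    dtau = T / n_tau
    C = np.ones(n)                          # initial guess for the noise
    for _ in range(n_iter):
        U = np.eye(n)                       # Floquet operator, eq. (12)
        for t in range(n_tau):
            gen = -np.diag(D) * np.exp(-t * dtau) + make_M(C)
            U = (np.eye(n) + dtau * gen) @ U
        U = np.roll(U, -1, axis=0)          # stand-in for the shift T_1
        w, v = np.linalg.eig(U)
        theta = np.real(v[:, np.argmax(np.abs(w))])   # optimal-growth mode
        wp, vp = np.linalg.eig(np.linalg.inv(U).T)
        p = np.real(vp[:, np.argmax(np.abs(wp))])     # dual, minimal growth
        C = D * (make_N(theta).T @ p)       # self-consistency, eq. (10)
        sigma = np.log(np.max(np.abs(w))) / T
    return (xi - 2.0) * sigma               # exponent h of theta_m ~ k_m^-h
```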
In order to get a better assessment of the true statistical weight of the optimal solutions, we computed the next to leading order term in a “semi-classical” expansion. Fluctuations around the action were developed to quadratic order with respect to $`𝜽^\mathrm{𝟎}`$, $`𝒑^\mathrm{𝟎}`$, $`𝐂^\mathrm{𝟎}`$, and the summation over all perturbed trajectories leading to the same effective scaling exponent for the $`𝜽`$ field after $`m`$ cascade steps was performed. It turns out (see Fig.2) that the contribution to the action of the quadratic fluctuations, $`S_1(h)`$, greatly improves the evaluation of $`\zeta (n)`$. Naturally, in the absence of any small parameter in the problem, we cannot take for granted that the next correction(s) would not spoil this rather nice agreement with numerical data. But the surprising fact that $`S_0+S_1`$ is strongly reduced with respect to $`S_0`$, even for the most intense events, does not imply by itself a lack of consistency of our computation. In any case, the prediction of the asymptotic slope of the $`\zeta _n`$ curve, based on the value $`h_{min}`$, is obviously valid beyond all orders of perturbation. Moreover, for values of $`\xi >1`$, we find that the second order exponent extracted from our calculation is in good agreement with the exact result $`\zeta _2=2-\xi `$, suggesting that our approach is able to give relevant statistical information also on not too intense fluctuations. In conclusion, we have presented an application of the semi-classical approach in the framework of shell models for random advection of passive scalar fields. Instantons are calculated through a numerically assisted method solving the equations coming from probability extrema: the algorithm has proved capable of picking up those configurations giving the main contributions to high order moments. Of course, we are far from having a systematic, analytically controlled, approach to calculate anomalous exponents in this class of models. Nevertheless, the encouraging results presented here raise some relevant questions which go well beyond the realm of shell-models. To quote just one, we still lack a full comprehension of the connection between the usual multiplicative-random-process and instantonic approaches to multifractality: in particular, it is not clear what the prediction would be for multi-scale and multi-time correlations of the kind discussed in within the instantonic formulation. It is a pleasure to thank J-L. Gilson and P. Muratore-Ginanneschi for many useful discussions on the subject. LB has been partially supported by INFM (PRA-TURBO) and by the EU contract FMRX-CT98-0175.
# Comments on the Moduli Dynamics of 1/4 BPS Dyons UOSTP-99-007 SNUTP-99-042 KIAS-P99078 hep-th/9909035 Dongsu Bak <sup>a</sup><sup>1</sup><sup>1</sup>1Electronic Mail: dsbak@mach.uos.ac.kr and Kimyeong Lee <sup>bc</sup><sup>2</sup><sup>2</sup>2Electronic Mail: klee@kias.re.kr <sup>a</sup> Physics Department, University of Seoul, Seoul 130-743, Korea <sup>b</sup> Physics Department and Center for Theoretical Physics Seoul National University, Seoul 151-742, Korea <sup>c</sup> School of Physics, Korea Institute for Advanced Study 207-43, Cheongryangryi-Dong, Dongdaemun-Gu, Seoul 130-012, Korea<sup>3</sup><sup>3</sup>3address after 9-01-1999 > We rederive the nonrelativistic Lagrangian for the low energy dynamics of 1/4 BPS dyons by considering the time dependent fluctuations around classical 1/4 BPS configurations. The relevant fluctuations are the zero modes of the underlying 1/2 BPS monopoles. Recently the 1/4 BPS dyonic configurations have been constructed and their nature explored in the $`𝒩=4`$ supersymmetric Yang-Mills theories . Since the supersymmetric Yang-Mills theories arise as a low energy description of parallel D3 branes in the type IIB string theory , the quantum 1/4 BPS states have a string interpretation as multi-pronged strings . In the classical field theory, the 1/4 BPS configurations can be viewed as a collection of 1/2 BPS dyons positioned with respect to each other so that a balance of the Coulomb and Higgs forces is achieved. The BPS equations satisfied by the classical 1/4 BPS configurations consist of the 1/2 BPS monopole equation and its gauge zero mode equation. The underlying 1/2 BPS configurations are uniquely determined by the moduli coordinates, which in turn determine the solution of the second BPS equation uniquely . The low energy dynamics of 1/4 BPS monopoles has been explored in Ref. , where it is shown that a specific potential is required in addition to the kinetic terms over the moduli space. The basic ideas of the construction were as follows. In the limit where 1/4 BPS configurations are almost 1/2 BPS, it should be possible to rediscover the physics of 1/4 BPS configurations from the zero mode dynamics of 1/2 BPS configurations. Since static forces exist between 1/2 BPS solitons in the case of misaligned vacua , the simplest possibility is to add a potential term to the moduli space dynamics. The potential is indeed uniquely determined from the given knowledge of the electric charge and mass of the 1/4 BPS states. Here the result by Tong was particularly useful . The low energy Lagrangian has a BPS bound and its BPS configuration corresponds to the 1/4 BPS field configuration . For a simple case, quantum 1/4 BPS states of the corresponding supersymmetric Lagrangian have been found in Ref. . However, the derivation of the low energy dynamics was in some sense indirect. Even though the presence of the potential is obvious from considering the interaction between point particle dyons, the exact structure of the potential cannot be obtained from the particle point of view. In this note, we rederive the low energy Lagrangian for 1/4 BPS configurations by a field theoretic method. The dynamical variables are the zero modes, or the moduli, of the underlying 1/2 BPS configurations. We begin with the $`𝒩=4`$ supersymmetric Yang-Mills theory. We choose a compact semisimple group $`G`$ of rank $`r`$. Among the six Higgs fields, only two Higgs fields $`a,b`$ play a role in the BPS bound.
The bosonic part of the Lagrangian is given by $$L=\frac{1}{2}\int d^3x\mathrm{tr}\left\{𝐄^2-𝐁^2+(D_0a)^2-(𝐃a)^2+(D_0b)^2-(𝐃b)^2-\left(i[a,b]\right)^2\right\},$$ (1) where $`D_0=(\partial _0-iA_0)`$, $`𝐃=\mathbf{\nabla }-i𝐀`$, and $`𝐄=\partial _0𝐀-𝐃A_0`$. The four vector potential $`(A_0,𝐀)=(A_0^aT^a,𝐀^aT^a)`$ and the group generators $`T^a`$ are traceless hermitian matrices such that $`\mathrm{tr}T^aT^b=\delta ^{ab}`$. As shown in Ref. , there is a BPS bound on the energy functional, which is saturated when configurations satisfy $`𝐁=𝐃b,`$ (2) $`𝐄=𝐃a,`$ (3) $`D_0b-i[a,b]=0,`$ (4) $`D_0a=0,`$ (5) together with the Gauss law, $$𝐃\cdot 𝐄-i[b,D_0b]-i[a,D_0a]=0.$$ (6) Equation (2) is the old BPS equation for 1/2 BPS monopoles and is called the primary BPS equation. Equations (3), (4), (5), and (6) can be put together into a single equation, $$𝐃^2a-[b,[b,a]]=0,$$ (7) which is called the secondary BPS equation. This equation is the global gauge zero mode equation for the first BPS equation. We can choose the gauge where $`A_0=a`$, in which case the configuration itself becomes static in time. In the asymptotic region, the two Higgs fields take the form $`b\simeq 𝐛\cdot 𝐇-{\displaystyle \frac{𝐠\cdot 𝐇}{4\pi r}},`$ (8) $`a\simeq 𝐚\cdot 𝐇-{\displaystyle \frac{𝐪\cdot 𝐇}{4\pi r}},`$ (9) where $`𝐇`$ is the Cartan subalgebra. We are interested in the case where the expectation value $`𝐛`$ breaks the gauge group $`G`$ maximally to its abelian subgroups $`U(1)^r`$. Then, there exists a unique set of simple roots $`𝜷_1,𝜷_2,\mathrm{},𝜷_r`$ such that $`𝜷_\alpha \cdot 𝐛>0`$ . The magnetic and electric charges are given by $`𝐠=4\pi {\displaystyle \sum _{\alpha =1}^r}n_\alpha 𝜷_\alpha ,`$ (10) $`𝐪={\displaystyle \sum _{\alpha =1}^r}q_\alpha 𝜷_\alpha ,`$ (11) where the integers $`n_\alpha \ge 0`$. Any solution of these BPS equations possesses a mass that saturates the BPS bound $$M=Z=𝐛\cdot 𝐠+𝐚\cdot 𝐪,$$ (12) where $`Z`$ is the larger of the two central charges in the $`𝒩=4`$ supersymmetric theory. The solutions of the primary BPS equation describe a collection of 1/2 BPS monopoles. For each simple root, there exists a fundamental monopole with four zero modes. The integer $`n_\alpha `$ denotes the number of $`𝜷_\alpha `$ fundamental monopoles. We consider the case where all $`n_\alpha `$ are positive so that the monopoles do not separate into mutually noninteracting subgroups. The moduli space of the 1/2 BPS configuration has the dimension of the number of zero modes, $`4\sum _\alpha n_\alpha `$. With the moduli space coordinates $`z^M`$, the zero modes are a linear combination of moduli coordinate dependence and a local gauge transformation. With a simple pseudo four dimensional vector $`A_\mu (𝐱,z^M)=(𝐀,b)`$ with $`\mu =1,2,3,4`$, the zero modes will be $$\delta _MA_\mu =\frac{\partial A_\mu }{\partial z^M}+D_\mu ϵ_M,$$ (13) where $`D_\mu ϵ_M=\partial _\mu ϵ_M-i[A_\mu ,ϵ_M]`$ with the understanding that $`\partial _4=0`$. The zero mode equations for the primary BPS equation are $`𝐃\times \delta _M𝐀=𝐃\delta _Mb-i[\delta _M𝐀,b],`$ (14) $`D_\mu \delta _MA_\mu =0,`$ (15) where the second equation is the background field gauge fixing condition. From the field theory, there is a well defined metric on the moduli space , $$g_{MN}(z)=\int d^3x\mathrm{tr}\,\delta _MA_\mu \delta _NA_\mu .$$ (16) The low energy dynamics of 1/2 BPS configurations is given by the nonrelativistic Lagrangian $$L_{1/2}=\frac{1}{2}g_{MN}(z)\dot{z}^M\dot{z}^N.$$ (17) As there are $`r`$ unbroken global $`U(1)`$ symmetries, the corresponding electric charges should be conserved.
In other words, $`L_{1/2}`$ should have $`r`$ cyclic coordinates corresponding to these gauge transformations. For each $`𝜷_\alpha `$ $`U(1)`$ symmetry the corresponding cyclic coordinate is denoted by $`\psi ^\alpha `$ with $`\alpha =1,\mathrm{},r`$. Expanding the asymptotic value $`𝐚=\sum _\alpha a^\alpha 𝝀_\alpha `$, where the $`𝝀_\alpha `$’s are the fundamental weights such that $`𝝀_\alpha \cdot 𝜷_\beta =\delta _{\alpha \beta }`$, we notice that $`D_\mu a`$ is a gauge zero mode $$D_\mu a=a^\alpha K_\alpha ^M\delta _MA_\mu ,$$ (18) where $$K_\alpha ^M\frac{\partial }{\partial z^M}=\frac{\partial }{\partial \psi ^\alpha }$$ (19) is the Killing vector for the $`𝜷_\alpha `$ $`U(1)`$ symmetry. If we divide the moduli coordinates $`z^M`$ into $`\psi ^\alpha `$ and the rest $`y^i`$, the Lagrangian (17) can be rewritten as $$L_{1/2}=\frac{1}{2}h_{ij}(y)\dot{y}^i\dot{y}^j+\frac{1}{2}L_{\alpha \beta }(y)(\dot{\psi }^\alpha +w_i^\alpha (y)\dot{y}^i)(\dot{\psi }^\beta +w_j^\beta (y)\dot{y}^j).$$ (20) Here $`h_{ij}=g_{ij}`$, $`L_{\alpha \beta }=g_{MN}K_\alpha ^MK_\beta ^N`$, and $`w_i^\alpha =L^{\alpha \beta }g_{\beta i}`$. Notice that all metric components are independent of $`\psi ^\alpha `$. Let us now explore the low energy dynamics of 1/4 BPS configurations. The idea is to calculate the field theoretic Lagrangian for a suitable initial condition in the field theory. One needs to specify the fields and their time derivatives or their momenta. Clearly, we require the initial condition to be given by a 1/4 BPS configuration when there is no real time evolution. As the momentum variables $`𝐄`$ and $`D_0b`$ are nonzero for 1/4 BPS configurations, nontrivial time evolution will ensue only if we add additional field momenta or time derivatives to the 1/4 BPS configuration. The moduli space dynamics of 1/2 BPS configurations is correct when the kinetic energy is much smaller than the rest mass. This means the order of the velocities, $`v\sim \dot{z}^M`$, is much smaller than 1. For 1/4 BPS configurations, there is a natural scale $`\eta \equiv |𝐚|/|𝐛|`$. We will see that the limit $`\eta \ll 1`$ is the suitable region for the low energy dynamics. Thus, we take as initial condition the fields $`𝐀(𝐱,y^i),b(𝐱,y^i),a(𝐱,y^i)`$ and the momentum variables $`𝐃a+\dot{z}^M\delta _M𝐀`$, $`i[a,b]+\dot{z}^M\delta _Mb`$, and $`\dot{z}^M\delta _Ma`$. Here we have replaced the zeroth order momentum variables with the field variables by using the 1/4 BPS equations. We also choose the gauge $`A_0=a`$. $`\delta _Ma`$ cannot be defined by the zero mode equation of the secondary BPS equation. Otherwise the asymptotic form (9) would imply a nonzero contribution from $`\partial _0𝐪(z)`$ to $`\partial _0a`$. The 1/4 BPS condition involves the field momenta and we cannot include the additional field momenta at a given point of the moduli space while maintaining the 1/4 BPS equations. Rather, we choose $`\dot{z}^M\delta _Ma`$ to be an unspecified quantity of order $`\eta v`$, whose exact nature, as we will see soon, is irrelevant for the low energy dynamics. As $`\delta _M𝐀`$ and $`\delta _Mb`$ satisfy the background gauge, the Gauss law is satisfied for the initial condition to order $`v`$. There is a correction of order $`\eta ^2v`$ due to the $`a`$ field, but it is negligible to the order we are working at. Let us now calculate the Lagrangian (1) for this initial condition.
It becomes $$L=-𝐛\cdot 𝐠+\frac{1}{2}\int d^3x\mathrm{tr}\left\{(\dot{z}^M\delta _MA_\mu )^2\right\}+\int d^3x\mathrm{tr}\left\{\dot{z}^M\delta _M𝐀\cdot 𝐃a+\dot{z}^M\delta _Mb\,i[a,b]\right\},$$ (21) where the terms first order in velocity appear because there are nonzero field momenta for the 1/4 BPS configurations. Here again we used the 1/4 BPS equations to replace the momenta with the fields. This Lagrangian is of order $`v^2`$ or $`\eta v`$. We have neglected the terms of order $`v^2\eta ^2`$, which come from the kinetic energy of the $`a`$ field. The terms linear in $`\dot{z}^M`$ can be rewritten as a boundary contribution by using the background gauge condition, $$\dot{z}^M\int d^3x\mathrm{tr}\left\{\delta _M𝐀\cdot 𝐃a+\delta _Mb\,i[a,b]\right\}=\dot{z}^M\oint d𝐒\cdot \mathrm{tr}(a\,\delta _M𝐀).$$ (22) Noticing that $`𝐃a`$ and $`i[a,b]`$ are a global gauge zero mode, $`a^\alpha K_\alpha ^M\delta _MA_\mu `$, we can rewrite the nonrelativistic Lagrangian as $$L_1=\frac{1}{2}g_{MN}\dot{z}^M\dot{z}^N+g_{MN}\dot{z}^Ma^\alpha K_\alpha ^N,$$ (23) where the constant $`-𝐛\cdot 𝐠`$ is omitted. Let us introduce new moduli coordinates $`\{\zeta ^M\}=y^i,\chi ^\alpha `$ such that $$\chi ^\alpha =\psi ^\alpha +a^\alpha t.$$ (24) Since this transformation shifts only cyclic coordinates, the above Lagrangian becomes $$L_{1/4}=\frac{1}{2}g_{MN}(\zeta )\dot{\zeta }^M\dot{\zeta }^N-\frac{1}{2}g_{MN}(\zeta )a^\alpha K_\alpha ^Ma^\beta K_\beta ^N.$$ (25) The kinetic term of this Lagrangian is the low energy Lagrangian (17) for 1/2 BPS configurations and there is an additional potential. In terms of the $`y^i,\chi ^\alpha `$ variables, $$L_{1/4}=\frac{1}{2}h_{ij}(y)\dot{y}^i\dot{y}^j+\frac{1}{2}L_{\alpha \beta }(y)(\dot{\chi }^\alpha +w_i^\alpha (y)\dot{y}^i)(\dot{\chi }^\beta +w_j^\beta (y)\dot{y}^j)-\frac{1}{2}L_{\alpha \beta }(y)a^\alpha a^\beta .$$ (26) This is exactly the low energy Lagrangian obtained in Ref. . There are a couple of further points to be discussed. The exact 1/4 BPS configuration is static in the $`y^i`$ and $`\psi ^\alpha `$ coordinates, so that $`\chi ^\alpha =a^\alpha t+\mathrm{constant}`$. The velocity of the $`\chi ^\alpha `$ coordinates is of order $`\eta `$, which is all right as $`v`$ and $`\eta `$ can be of the same order. When we define the Hamiltonian, the $`z^M`$ coordinates are not appropriate. Again from the field theory, the energy function we have has a contribution from the momentum variables. In terms of the $`z^M`$ variables, the field theoretic energy functional for our initial condition becomes $$E=𝐛\cdot 𝐠+𝐚\cdot 𝐪+L_1.$$ (27) In terms of the $`\{\zeta ^M\}=\{y^i,\chi ^\alpha \}`$ variables, this energy becomes $$E=𝐛\cdot 𝐠+E_{1/4},$$ (28) where $`E_{1/4}`$ is the energy corresponding to the Lagrangian (25), $$E_{1/4}=\frac{1}{2}g_{MN}(\zeta )\dot{\zeta }^M\dot{\zeta }^N+\frac{1}{2}g_{MN}(\zeta )a^\alpha K_\alpha ^Ma^\beta K_\beta ^N.$$ (29) Here we have used the Tong formula $$𝐚\cdot 𝐪=g_{MN}(\zeta )a^\alpha K_\alpha ^Ma^\beta K_\beta ^N.$$ (30) The energy $`E_{1/4}`$ has a BPS bound, which is saturated when $`\dot{z}^M=0`$, or $`\dot{\zeta }^M=a^\alpha K_\alpha ^M`$, and then takes the value of the electric mass $`𝐚\cdot 𝐪`$. Indeed, with the conserved momenta conjugate to the cyclic coordinates, $`q_\alpha =g_{MN}K_\alpha ^M\dot{\zeta }^N`$, a completion of squares gives $`E_{1/4}-a^\alpha q_\alpha =\frac{1}{2}g_{MN}(\dot{\zeta }^M-a^\alpha K_\alpha ^M)(\dot{\zeta }^N-a^\beta K_\beta ^N)\ge 0`$, with equality precisely at $`\dot{\zeta }^M=a^\alpha K_\alpha ^M`$, where (30) gives $`E_{1/4}=𝐚\cdot 𝐪`$. This nonrelativistic BPS configuration describes the field theoretic 1/4 BPS configurations. Thus, a consistent picture of the moduli space dynamics has emerged. As shown in Ref. , the above Lagrangian can be generalized to the supersymmetric case so that it describes 1/4 BPS dyons in the $`N=4`$ supersymmetric Yang-Mills theory. There naturally exists a quantum BPS bound for this supersymmetric Lagrangian. In Ref.
, the quantum 1/4 BPS states are found by solving the quantum BPS conditions on the wave functions for the case of the $`SU(3)`$ group. Acknowledgments Part of this work was accomplished during our stay at the Particles, Fields and Strings ’99 Conference of the Pacific Institute for the Mathematical Sciences at Vancouver, whose hospitality we gratefully acknowledge. We also acknowledge a useful discussion with Piljin Yi. D.B. is supported in part by Ministry of Education Grant 98-015-D00061. K.L. is supported in part by the SRC program of the SNU-CTP and the Basic Science and Research Program under BRSI-98-2418. D.B. and K.L. are also supported in part by KOSEF 1998 Interdisciplinary Research Grant 98-07-02-07-01-5.
# Universality and saturation of intermittency in passive scalar turbulence \[ ## Abstract The statistical properties of a scalar field advected by the non-intermittent Navier-Stokes flow arising from a two-dimensional inverse energy cascade are investigated. The universality properties of the scalar field are directly probed by comparing the results obtained with two different types of injection mechanisms. Scaling properties are shown to be universal, even though anisotropies injected at large scales persist down to the smallest scales and local isotropy is not fully restored. Scalar statistics is strongly intermittent and scaling exponents saturate to a constant for sufficiently high orders. This is observed also for the advection by a velocity field rapidly changing in time, pointing to the genericity of the phenomenon. The persistence of anisotropies and the saturation are both statistical signatures of the ramp-and-cliff structures observed in the scalar field. \] Ramp-and-cliff structures are a characteristic feature of fields, like dye concentration or temperature, obeying the passive scalar equation (see, e.g., Refs. ): $$\partial _tT(𝒓,t)+𝒗(𝒓,t)\cdot \mathbf{\nabla }T(𝒓,t)=\kappa \mathrm{\Delta }T(𝒓,t),$$ (1) i.e. advected by the velocity $`𝒗`$ and smeared out by the molecular diffusivity $`\kappa `$. Scalar gradients tend indeed to concentrate in sharp fronts separated by large regions of weak gradients (see Fig. 1). The experimental evidence for ramps and cliffs is long-standing and massive . Furthermore, numerical simulations indicate that scalar structures are not mere footprints of those in $`𝒗`$ and appear also for synthetic flows . The presence of ramp-and-cliff structures raises some important issues about scalar turbulence and its intermittency properties. Following Kolmogorov’s 1941 theory, it is indeed usually assumed that turbulence restores universality, i.e. independence of the large-scale injection mechanisms, and isotropy at small scales (see Ref. ). The evidence for scalar turbulence is however that anisotropies find their way down to the small scales, manifesting themselves in a scalar gradient skewness of $`O(1)`$, independently of the Péclet number . This is due to the preferential alignment of ramp-and-cliff structures with large-scale scalar gradients, present in most experimental situations. For structure functions $`S_n(𝒓)=\langle \left(T(𝒓)-T(\mathrm{𝟎})\right)^n\rangle `$, this persistence is revealed by normalized odd orders $`S_{2n+1}/S_2^{n+1/2}`$ decaying more slowly than the expected $`r^{2/3}`$ (see Ref. ). Is this experimentally observed behavior signalling that small scales are fully imprinted by the large scales and that the universality framework should be discarded altogether? This is the first issue, raised in Refs. , that we shall investigate in this Letter. The second is about the consequences of cliffs for high-order intermittency. Their strength makes them candidates for the dominant contributions to strong event statistics, and the issue raised in Ref. is whether structure function scaling exponents then saturate to a constant for high orders $`n`$. Numerical simulations are an ideal tool to analyze the previous questions, allowing to probe universality, by comparing the results obtained with two different types of injection, and saturation, by gathering enough statistics to capture strong events. Here, we shall take for $`𝒗`$ a 2D flow generated by a Navier-Stokes inverse energy cascade .
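For orientation, the basic measurement behind the structure-function analysis below is simple to state in code; a minimal sketch for one scalar snapshot on a periodic grid (illustrative only — the production analysis averages over many fields, directions and, in the shear case, the angles discussed in the text):

```python
import numpy as np

def structure_functions(T, seps, orders):
    """S_n(r) = <(T(x+r) - T(x))^n> for a scalar snapshot T on a periodic
    2D grid, averaging increments over both coordinate axes. Scaling
    exponents follow from log-log slopes over the inertial range."""
    S = np.empty((len(seps), len(orders)))
    for i, r in enumerate(seps):
        dT = np.concatenate([(np.roll(T, -r, axis=0) - T).ravel(),
                             (np.roll(T, -r, axis=1) - T).ravel()])
        S[i] = [np.mean(dT ** n) for n in orders]
    return S

# local slopes, e.g. for the j-th order:
# zeta = np.gradient(np.log(np.abs(S[:, j])), np.log(np.asarray(seps)))
```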
Universality is then understood as independence of the scalar properties from the injection mechanisms for this fixed $`𝒗`$ statistics. The scalar is injected at large scales, comparable to those where the inverse cascade is stopped by friction effects, and its properties are investigated in the energy inertial range (see Ref. for details). There, the velocity is isotropic, scale-invariant with exponent $`1/3`$ (no intermittency corrections to Kolmogorov scaling ) and has dynamical correlation times (finite and free of the synthetic flow pathologies discussed in Ref. ). As for scalar injection, a first choice is naturally suggested by experiments, where it usually takes place via a large-scale gradient. We assume then, as in Refs. , that the average $`\langle T\rangle =𝒈\cdot 𝒓`$ and we integrate the equation for the fluctuations $`\theta =T-𝒈\cdot 𝒓`$, i.e. (1) with a source term $`-𝒗\cdot 𝒈`$ on the right hand side. A snapshot of the $`\theta `$ field is shown in Fig. 1. The presence of the gradient $`𝒈`$ breaks isotropy and allows for asymmetries and non-vanishing odd-order moments in the scalar statistics. The second choice is a more artificial random forcing $`f(𝒓,t)`$ added to (1). Its motivation is to produce isotropic statistics, e.g. by taking $`f`$ Gaussian, with zero average and correlation function $`\langle f(𝒓,t)f(\mathrm{𝟎},0)\rangle =\delta (t)\chi (r/L)`$. The scale $`L`$ where the injection is concentrated is taken comparable to the velocity integral scale. The equations for the scalar are integrated in parallel to the 2D Navier-Stokes equation for about 100 eddy turn-over times by a standard pseudo-spectral code on a $`2048^2`$ grid. In the runs presented in the following, the diffusive term is replaced by a bi-Laplacian, but it was checked by another series of simulations that using a Laplacian gives consistent results, although on less extended scaling ranges. Let us first show that the persistence of anisotropies observed in experiments occurs also in our case. Odd-order structure functions vanish in the randomly forced case. In the shear case they do not, except for separations $`𝒓\perp 𝒈`$. For non-orthogonal $`𝒓`$’s, the scaling exponents do not depend on the direction of $`𝒓`$ and in Fig. 2 we present the parallel structure functions, i.e. $`𝒓`$ aligned with $`𝒈`$. The resulting third-order skewness $`S_3/S_2^{3/2}`$ scales as $`r^{0.25}`$, the 2nd-order exponent being $`2/3`$ (see Fig. 4). As in the experiments, the skewness decay is slower than the expected $`r^{2/3}`$. Furthermore, here enough statistics is accumulated to give access also to the 5th order. The persistence effect is now dramatic, as $`S_5/S_2^{5/2}\sim r^{-0.2}`$ increases at small scales. Intermittency generates of course an ambiguity in the normalization, e.g. $`S_5/S_4^{5/4}`$ is decaying, albeit very slowly. This reflects the fact that scalar increment pdf’s change shape with $`r`$ and one should then be specific about which part of it is sampled and about the choice of the observable representative of the degree of anisotropy. It is however unambiguously clear that local isotropy is not fully restored at small scales and the quality of the scaling laws found here indicates that this is a genuine effect, not related to finite Péclet numbers. More insight into this breaking of full universality is gained by analyzing scalar increment pdf’s and moments of even order, which are non-vanishing for both types of forcing. Fig. 3 shows that the pdf’s for the two types of injections do not have the same shape (the same holding when symmetric parts are taken).
In the shear case, the separations $`𝒓`$ have been taken along the diagonal directions, at angles $`\varphi =\pi /4`$ with respect to $`𝒈`$. This choice is motivated by the application of the procedure developed in Refs. to the 2D case and permits removal of the first subleading anisotropic contribution $`\mathrm{cos}2\varphi `$ to even-order moments. The fact that the pdf’s have different shapes implies that the adimensional constants $`C_n`$ in the structure functions $`S_n(r)=C_n\left(ϵr\right)^{n/3}\left(L/r\right)^{n/3-\zeta _n}`$ are not universal, as was also explicitly checked by direct comparison. Conversely, in Fig. 4 it is shown that the scaling exponents of even order moments are the same for the two types of forcing. For the pdf’s this means that, although having different shapes, the curves rescale with $`r`$ in the same way. The picture emerging from these results is as follows: structure function exponents $`\zeta _n`$ are universal, while constants, and thus the pdf’s of scalar increments, are not. The difference between isotropic and anisotropic situations is that the non-universal constants $`C_{2n+1}`$ in odd-order structure functions vanish by symmetry for the former case, while they generically do not for the latter. Structure functions present anomalous scaling and there is no full restoration of isotropy while going toward small scales. This picture of universality is weaker than in Kolmogorov’s 1941 theory, but coincides with the one that emerged for intermittency in the Kraichnan passive scalar model (see also Ref. ). The velocity used here has finite correlation times, scalar correlation functions do not obey closed equations, yet the universality properties are the same. This points to a broader validity of the mechanisms identified for the Kraichnan model and it is likely that the same universality framework generically applies to passive scalar turbulence. Let us now discuss the consequences of cliffs for the intermittency at high orders. Their singularity strength suggests that the scaling exponents might saturate at large orders, i.e. $`\zeta _n`$ tends to a constant $`\zeta _{\infty }`$ for large enough $`n`$. Physical self-consistency for the survival of steepening strong fronts is demonstrated in Ref. , where saturation is shown to imply that dissipation preferentially spares the cliffs with the largest jumps. High-order structure functions in our simulations are shown in Fig. 5, together with the $`\zeta _n`$ vs $`n`$ curve, compatible with saturation. The same holds for ratios of two moments vs $`r`$ or one moment vs the other. Note that, for any finite-size field, there are orders where the moments start to be spoiled and some single strongest structure having a dissipative width will plausibly dominate the statistics, as in Burgers’ equation . The convergence of the moments was inspected by the usual test of checking that $`(\delta _rT)^{14}𝒫`$ decays before the pdf $`𝒫(\delta _rT)`$ of the scalar increments $`\delta _rT\equiv T(r)-T(0)`$ becomes noisy. An alternative observable, more reliable than moments (as it is less sensitive to the extreme tails), is obtained by looking, for fixed $`\delta _rT`$, at how $`𝒫`$ varies with the separation $`r`$. Saturation is equivalent to the pdf taking the form $`𝒫(\delta _rT)=r^{\zeta _{\infty }}𝒬(\delta _rT/T_{rms})`$ for $`\delta _rT`$ sufficiently larger than $`T_{rms}=\langle \left(T-\langle T\rangle \right)^2\rangle ^{1/2}`$. The collapse of the curves $`r^{-\zeta _{\infty }}𝒫(\delta _rT)`$ in Fig.
6 is therefore a signature of saturation and gives the unknown function $`𝒬`$. In Fig. 7, we plot the cumulated probabilities $`\int _{\delta _rT}^{\infty }𝒫`$ vs $`r`$ for various $`\delta _rT`$ and the parallelism of the curves is again the footprint of saturation. Explicit evidence for the universality of $`\zeta _{\infty }`$ is provided in Fig. 6. The physical origin of cliffs resides in the Lagrangian structure of (1), i.e. in the fact that particles are passively transported by the velocity $`𝒗`$. In regions where velocity gradients are sufficiently persistent in space and time, widely spaced particles tend to approach each other and generate the observed abrupt variations of the scalar field. This suggests that, even though quantitative aspects, such as the order of saturation or the value $`\zeta _{\infty }`$, depend on the choice of $`𝒗`$, the saturation phenomenon itself should occur for a wide class of random velocity fields. The Kraichnan model is unfavorable for saturation because of the short velocity correlation time. Despite this, for large dimensionalities of space, saturation analytically follows from an instanton solution . For the 3D case, saturation was phenomenologically suggested in Ref. and inferred from an instantonic bound in Ref. . Direct numerical evidence is provided by our 3D numerical simulations, whose results are presented in Fig. 8. Scaling exponents have been measured using the Lagrangian method presented in Ref. , and $`(2-\gamma )/2`$ is the spatial Hölder exponent of $`𝒗`$, as in Ref. . The order of the moments needed to observe saturation is expected to diverge for $`\gamma \to 2`$, while for $`\gamma \to 0`$ the action of large-scale gradients should favor close approaches between particles. The order is thus expected to decrease with $`\gamma `$, and for the smoothest velocity in Fig. 8 saturation is indeed occurring already at the 4th order and thus becomes observable. This confirms the physical picture of saturation due to the cliffs formed in the scalar field and the genericity of the phenomenon for scalar turbulence intermittency. Acknowledgements. Helpful discussions with M. Chertkov, A. Fairhall, U. Frisch, B. Galanti, R.H. Kraichnan, V. Lebedev, A. Noullez, J.F. Pinton, I. Procaccia and A. Pumir are gratefully acknowledged. We benefited from the hospitality of the 1999 ESF-TAO Study Center. The INFM PRA Turbo (AC) and the ERB-FMBI-CT96-0974 (AL) contracts are acknowledged. Simulations were performed at IDRIS (no 991226) and at CINECA (INFM Parallel Computing Initiative).
# UKQCD’s latest results for the static quark potential and light hadron spectrum with $`O(a)`$ improved dynamical fermions. ## 1 Introduction Previous simulations at fixed $`\beta `$ with several values of $`\kappa _{\mathrm{sea}}`$ have shown a strong dependence on the lattice spacing as $`\kappa _{\mathrm{sea}}`$ is varied . This complicates the chiral extrapolations and obscures comparisons with quenched simulations. UKQCD have proposed that simulations should be carried out at fixed lattice spacing, $`a`$, for different values of $`\kappa _{\mathrm{sea}}`$. In this way it is possible to study the effect of varying the sea quark mass at the same effective lattice volume. The lattice spacing is fixed by tuning the bare parameters, $`\beta `$ and $`\kappa _{\mathrm{sea}}`$. This is achieved by comparing a lattice observable with its physical value. In our case the Sommer scale, $`r_0`$, has been used, where $$F(r_0/a)(r_0/a)^2=1.65,r_0=0.49\mathrm{fm}$$ (1) and $`F(r_0/a)`$ is the force between a static quark anti-quark pair. The Sommer scale was selected as it can be determined with good statistical precision and is independent of the valence quarks, avoiding the need for extrapolations. The details of the matching technique can be found in . ## 2 Simulation parameters The simulations are performed with two flavours of dynamical fermions. We use the standard Wilson gauge action together with the $`O(a)`$ improved Wilson fermion action. The clover coefficient used was determined non-perturbatively by the Alpha Collaboration. All simulations were carried out on a $`16^3\times 32`$ lattice. The parameters for the matched ensembles are shown in Table 1. The last entry shows a simulation at the lightest $`\kappa _{\mathrm{sea}}`$, which is not matched. Table 2 shows the results for the lattice spacing and $`r_0/a`$, which were obtained using the method described in . This corresponds to an effective lattice extent of approximately 1.7 fm for the matched simulations. ## 3 Static quark potential The standard form for the static quark potential, $$V(r)=V_0+\sigma r-\frac{e}{r},$$ (2) can be rescaled in terms of $`r_0`$ as $$[V(r)-V(r_0)]r_0=(1.65-e)\left(\frac{r}{r_0}-1\right)-e\left(\frac{r_0}{r}-1\right).$$ (3) Figure 1 shows the results for the static quark potential compared with eqn. 3. We observe good agreement with the universal fit $`-\pi /12r+\sigma r`$. With these results there is no indication of string breaking at large $`r/r_0`$. However the plot of the deviation from the model shows significant discretisation errors. At short distances, where the fits have to take this into account, there is some evidence that the lighter quark data lie below the heavier quark data. Parametric fits for the $`1/r`$ coefficient $`e`$ (see Figure 2) show an increase for the dynamical data of $`15\%\pm 4\%`$. This is consistent with perturbation theory, which suggests an increase for $`e`$ of around $`14\%`$ for $`\mathrm{N}_\mathrm{f}=2`$. ## 4 Light hadron spectrum Hadron masses were obtained from correlated least-$`\chi ^2`$ fits. The mesons have been fitted by a double cosh fit to local and fuzzed correlators simultaneously. Baryons are fitted by a single exponential fit to fuzzed correlators only. The ratio of the pseudoscalar to vector masses is shown in Table 3 for $`\kappa _{\mathrm{sea}}=\kappa _{\mathrm{val}}`$. One way to look for dynamical effects in the spectrum is to compare the pseudoscalar and vector meson masses as $`\kappa _{\mathrm{sea}}`$ is varied.
Figure 3 shows a plot of $`m_\mathrm{V}`$ against $`m_{\mathrm{PS}}^2`$ for all data sets. Since $`\beta `$ is different for each data set, the results for the meson masses are shown in units of $`r_0`$. Points corresponding to $`\kappa _{\mathrm{sea}}=\kappa _{\mathrm{val}}`$ are indicated by arrows. This plot shows that there is a trend towards the experimental points as $`\kappa _{\mathrm{sea}}`$ becomes lighter. A preliminary analysis of the spectrum has been conducted in the partially quenched scheme, where the partially quenched quark mass is defined as $$m_q^{\mathrm{PQ}}=\frac{1}{2}\left(\frac{1}{\kappa _{\mathrm{val}}}-\frac{1}{\kappa _{\mathrm{crit}}}\right).$$ (4) Here $`\kappa _{\mathrm{crit}}`$ has been determined from an extrapolation in the improved valence quark mass for each data set, $`(am_{\mathrm{PS}})^2\propto \stackrel{~}{m}_q(\kappa _{\mathrm{val}})`$, using $`b_\mathrm{m}`$ from perturbation theory. The pseudoscalar extrapolation as a function of $`m_q^{\mathrm{PQ}}`$ is shown for all data sets in Figure 4. A straight line has been fitted to the matched data sets, including the quenched simulation, using an uncorrelated fit. Data points from the lightest $`\kappa _{\mathrm{sea}}`$ simulation have been included in the plot. These points clearly have a different slope from the matched data sets. ## 5 Conclusions We have seen some evidence of screening in the static quark potential for dynamical $`\mathrm{N}_\mathrm{f}=2`$ simulations. Using data sets which have been matched to have the same effective lattice volume helps to disentangle the screening effects from the discretisation errors in the potential. Extrapolations of the pseudoscalar mass as a function of the partially quenched quark mass show that the slope is consistent for the matched ensembles. Further analysis of this type of extrapolation is in progress.
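As a closing remark on the matching procedure of section 1: with the parametrization (2), the condition (1) has a closed-form solution, which is how a fitted potential translates into $`r_0/a`$. A minimal sketch (the numbers are illustrative, not our fit results):

```python
import numpy as np

def r0_over_a(sigma_a2, e):
    """Solve F(r0/a)(r0/a)^2 = 1.65 for V(r) = V0 + sigma r - e/r.
    The force is F(r) = sigma + e/r^2, so F(r0) r0^2 = sigma r0^2 + e,
    giving r0/a = sqrt((1.65 - e) / (a^2 sigma))."""
    return np.sqrt((1.65 - e) / sigma_a2)

# illustrative lattice-unit values for the string tension and Coulomb term
print(r0_over_a(sigma_a2=0.065, e=0.3))   # ~4.6; then a = 0.49 fm / (r0/a)
```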
# Flavour-singlet pseudoscalar and scalar mesons ## 1 INTRODUCTION To study flavour singlet mesons, we need to consider quark loops which are disconnected (often called hairpins), namely evaluate $`\mathrm{Tr}\mathrm{\Gamma }\mathrm{M}^{-1}`$ where the sum (trace) is over all space (for zero momentum) at a given time value and all colours and spins. Here $`M`$ is the lattice fermion matrix and $`\mathrm{\Gamma }`$ is a combination of the appropriate $`\gamma `$-matrix and a product of gauge links if a non-local operator is used for the meson. Using a random volume source $`\xi `$ (where $`\xi ^{\ast }\xi =1`$ for the same colour, spin and space-time component and zero otherwise) and then solving $`M\varphi =\xi `$, one can evaluate unbiased estimates of the propagator $`M_{xy}^{-1}`$ from $`\xi _y^{\ast }\varphi _x`$, where the average is over the $`N_S`$ samples of the stochastic source. The drawback of this approach is that the variance of these estimates can be very large, so that typically hundreds of samples are needed. Here we present a method which succeeds in reducing this variance substantially at rather small computational expense. The variance reduction is based on expressing the fermion matrix $`M`$ as $$M=C+D=C(1+C^{-1}D)=(1+DC^{-1})C,$$ (1) where $`C`$ is easy to invert, for example the SW-clover term which is local in space. Then we have the exact identity $`M^{-1}`$ $`=`$ $`C^{-1}-C^{-1}DC^{-1}+\mathrm{}+(-1)^m(C^{-1}D)^mC^{-1}`$ (2) $`+`$ $`(-1)^n(C^{-1}D)^{n_1}M^{-1}(DC^{-1})^{n_2},`$ with $`n_1+n_2=n=m+1`$. Using the stochastic estimate for $`M^{-1}`$ on the rhs will reduce the variance of the estimate given by the lhs, since the terms not involving $`M^{-1}`$ can be evaluated either exactly (for example, terms with odd powers of $`D`$ vanish in the evaluation of a local trace) or in a subsidiary stochastic calculation with more samples, since no inversion is required. A special case of this ($`n_1=n_2=2`$) with Wilson fermions (for which $`C=1`$ and the terms with up to 3 powers of $`D`$ vanish for $`\mathrm{Tr}\mathrm{M}^{-1}`$) was used by the bermion group previously. Using larger values of $`n_1`$ and $`n_2`$ implies that the estimate of $`M^{-1}`$ is very non-local. To evaluate correlators between traces at $`t_1`$ and $`t_2`$, one must require that the samples of the stochastic volume source used in the two cases are different so that there is no bias. We use $`N_S=24`$ stochastic samples and this condition is readily implemented. This number of samples was chosen to make the stochastic sampling error smaller (actually about 50% in the cases to be discussed next) than the intrinsic variance from one gauge configuration to another. The computational effort is equivalent to that of obtaining two conventional propagators (from two sources of all colour-spins). We now apply this technique to $`12^3\times 24`$ lattices with $`N_f=2`$ at $`\beta =5.2`$ with $`C_{SW}=1.76`$ from UKQCD . We use valence quark masses equal to the sea-quark mass determined by the $`\kappa `$-values as given in Table 1. In order to improve the statistics we measure the disconnected diagrams on configurations separated by fewer trajectories than for the connected correlators, giving 253 and 169 configurations at $`\kappa =0.1395`$ and 0.1398, respectively. ## 2 PSEUDOSCALAR MESONS Since we are exploring $`N_f=2`$, we describe the flavour non-singlet ($`I=1`$) pseudoscalar meson as $`\pi `$ and the flavour singlet ($`I=0`$) as $`\eta `$.
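Before turning to the meson analyses, we note that the variance-reduction identity (2) is easy to prototype on dense toy matrices; the sketch below uses stand-in $`C`$ and $`D`$ (not a lattice Dirac operator) and Z2 noise, with the alternating signs that follow from the splitting $`M=C+D`$ as written:

```python
import numpy as np

rng = np.random.default_rng(7)
n, n1, n2, n_samples = 64, 2, 2, 24      # n1 + n2 = n = m + 1 as in eq. (2)

# stand-ins: C easy to invert (here diagonal), D a small off-diagonal part
C = np.diag(1.0 + 0.1 * rng.random(n))
D = 0.05 * rng.standard_normal((n, n))
M = C + D
Ci = np.linalg.inv(C)

# exact part: sum_{k=0}^{m} (-1)^k Tr[(Ci D)^k Ci]; for the Wilson case
# C = 1 discussed in the text, the odd-k local traces vanish
exact, P = 0.0, Ci.copy()
for k in range(n1 + n2):
    exact += (-1) ** k * np.trace(P)
    P = Ci @ D @ P

# stochastic remainder: (-1)^n Tr[(Ci D)^n1 M^-1 (D Ci)^n2]; only this
# piece carries the noise of the inversion, hence the variance reduction
A = np.linalg.matrix_power(Ci @ D, n1)
B = np.linalg.matrix_power(D @ Ci, n2)
acc = 0.0
for _ in range(n_samples):
    xi = rng.choice([-1.0, 1.0], size=n)  # Z2 volume source
    phi = np.linalg.solve(M, B @ xi)      # one inversion per sample
    acc += xi @ (A @ phi)
estimate = exact + (-1) ** (n1 + n2) * acc / n_samples
print(estimate, np.trace(np.linalg.inv(M)))  # agree within sampling error
```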
It is possible to apply our results to the physical case of 3 light quark flavours and the $`\eta ,\eta ^{}`$ splitting, as has been done for the quenched case, but we do not discuss this here. We use local and fuzzed operators matching the calculation of the connected meson propagators. Then, including our results for the hairpin diagrams with the appropriate sign, we can fit the flavour singlet correlator to the $`\eta `$ mass and excited states. Even though, as discussed above, we have measured the disconnected quark loop correlator from every time slice to every other with a stochastic error which is effectively negligible, the signal is very noisy. This arises since the disconnected part of the $`\eta `$ correlation has an error which does not decrease with increasing time separation $`T`$, unlike the connected correlator where the error is roughly constant as a percentage of the signal with increasing $`t`$. To illustrate this, we show the ratio of the disconnected $`D`$ to the connected component $`C`$ of the $`\eta `$ correlator in fig. 1. The line shows the expectation coming from one state ($`\pi `$) contributing to $`C`$ and one state ($`\eta `$) contributing to $`C-D`$ with a mass difference in lattice units of 0.16. There is some sign in our data of the impact of unitarity for matched valence and sea quarks: namely, that $`D/C`$ approaches 1 from below at large $`t`$. Making a fit with this constraint to $`D/C`$, it is possible to estimate the $`\eta -\pi `$ mass difference, and we see that it does increase as the sea quark mass is decreased. Assuming that the $`\eta `$ mass is constant as the quark mass is decreased, we would get an $`\eta `$ mass in the chiral limit of around $`800`$ MeV, of course with an uncontrolled systematic error. Because the flavour singlet correlator becomes noisy so rapidly with increasing time separation, it is very valuable to use a more extended basis for the meson (e.g. local and fuzzed operators) to help in extracting the ground state signal. The 2 state fit from $`t=3`$ to 9 is shown in Table 1. The zero-momentum correlator has contributions increasing like $`L^3`$ for spatial volume $`L^3`$ whereas the noise increases like $`L^6`$. Thus we find a better signal to noise ratio for smaller volumes - hence we present here only the results for $`L=12`$. ## 3 SCALAR MESONS For scalar mesons, we have the interesting situation that the $`q\overline{q}`$ mesons and the glueball can mix. Within the quenched approximation, it is possible to estimate this mixing. We make a preliminary study in the quenched approximation at $`\beta =5.7`$ of 100 configurations on $`12^3\times 24`$ lattices with SW-clover valence quarks having $`\kappa =0.14077`$ with $`C_{SW}=1.57`$ (here $`a^{-1}\approx 1`$ GeV and the quark mass is close to strange). We use 4 scalar meson operators: (i) closed Wilson loops (glueball operators) of two different sizes (Teper-smeared) and (ii) $`q\overline{q}`$ operators which are local and separated by fuzzed links. Using the glueball mass of 0.97(4) from ref. and the scalar $`q\overline{q}`$ mass of 1.48(15) from fitting the connected correlations, we are able to fit the disconnected correlation and the hairpin-Wilson loop correlation from $`t`$-values of 1 to 3 with a mixing given by $`Ea=0.4`$ assuming $`N_f=2`$. This fit (see fig. 2) assumes only ground state contributions, so the systematic error on $`E`$ from this assumption is hard to estimate.
The mixing estimated by ref. is similar in magnitude at these lattice parameters ($`E\approx 0.3`$ GeV), but they claim that on extrapolation to the continuum limit a much smaller value is obtained. Here we are using clover improvement, so order $`a`$ effects are suppressed. If our quenched mixing strength were to be applied to the scalar mass matrix, it would result in a downward shift of the lattice glueball mass by 20% for $`N_f=2`$. We can now measure directly the scalar spectrum for $`N_f=2`$ to explore this. We again use both glueball and $`q\overline{q}`$ operators and fit with 2 states the $`4\times 4`$ matrix of correlations for $`t`$ from 2 to 10: the result is shown as $`m_{FS}a`$ in Table 2. We find that the mass obtained from fitting only the glueball correlations ($`m_{GB}`$) is consistent with the full fit, as it should be. Moreover, we see a surprisingly low scalar mass - as emphasised in fig. 3, which compares with quenched results and the SESAM $`N_f=2`$ values. We do expect a relatively light flavour-singlet scalar mass because of mixing effects as described above, which would reduce the mass by 20%. This could explain in part our low scalar mass, but other explanations are also worth exploring. For example, the order $`a^2`$ corrections might be anomalously large for our lattice implementation (e.g. twice as large as in the quenched Wilson case). Another possible explanation of the light flavour-singlet scalar mass we find (comparable to our pion mass) would be a partial restoration of chiral symmetry in a finite volume. Our spatial size is 1.7 fm and no evidence of finite size effects was seen in a study of flavour non-singlet correlators. We have made a preliminary study of flavour singlet correlators on $`16^3`$ spatial lattices to check for finite size effects and, although the signal from the larger spatial volume is relatively noisier, we do see some evidence (at the $`2\sigma `$ level) of a higher scalar mass on the larger spatial lattice.
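As a rough cross-check of the 20% figure quoted above, one can diagonalize a 2×2 scalar mass matrix built from the fitted numbers. Treating the fitted $`Ea=0.4`$ directly as the off-diagonal element (the precise $`N_f`$ normalization is an assumption of this sketch) gives a lightest state about 22% below the input glueball mass:

```python
import numpy as np

# Glueball / qqbar scalar mixing in lattice units, using the fitted values above
m_glueball, m_qqbar, E_mix = 0.97, 1.48, 0.4
masses = np.linalg.eigvalsh(np.array([[m_glueball, E_mix],
                                      [E_mix,      m_qqbar]]))
print(masses)                                           # ~ [0.75, 1.70]
print("glueball shift:", 1.0 - masses[0] / m_glueball)  # ~ 0.22, i.e. roughly 20% down
```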
# MPS on the Hunt for Planets ### Acknowledgments. This was supported in part by the NSF and NASA.
# OLD AND NEW PROCESSES OF VORTON FORMATION Brandon Carter D.A.R.C., Observatoire de Paris, 92 Meudon, France Contribution to the 35th Karpacz meeting (ed. J. Kowalski-Glikman), Polanica, Poland, February 1999. Abstract After a brief explanation of the concept of a vorton, quantitative estimates of the vorton population that would be produced in various cosmic string scenarios are reviewed. Attention is drawn to previously unconsidered mechanisms that might give rise to much more prolific vorton formation than has been envisaged hitherto. This review is an updated version of a previous very brief overview of the theory of vortons, meaning equilibrium states of cosmic string loops, and of the cosmological processes by which they can be produced in various scenarios. The main innovation here is to draw attention to the possibility of greatly enhanced vorton formation in cases for which the cosmic string current is of the strictly chiral type that arises naturally in certain kinds of supersymmetric field theory. It is rather generally accepted that among the conceivable varieties of topological defects of the vacuum that might have been generated at early phase transitions, the vortex type defects describable on a macroscopic scale as cosmic strings are the kind that is most likely to actually occur – at least in the post inflationary epoch – because the other main categories, namely monopoles and walls, would produce a catastrophic cosmological mass excess. Even a single wall stretching across a Hubble radius would by itself be too much, while in the case of monopoles it is their collective density that would be too high unless the relevant phase transition occurred at an energy far below that of the G.U.T. level, a possibility that is commonly neglected on the grounds that no monopole formation occurs in the usual models for the transitions in the relevant range, of which the most important is that of electroweak symmetry breaking. The case of cosmic strings is different. One reason is that – although they are not produced in the standard electroweak model – strings are indeed produced at the electroweak level in many of the commonly considered (e.g. supersymmetric) alternative models. A more commonly quoted reason why the case of strings is different, even if they were formed at the G.U.T. level, is that – while it may have an important effect in the short run as a seed for galaxy formation – such a string cannot be cosmologically dangerous just by itself, while a distribution of cosmic strings is also cosmologically harmless because (unlike “local” as opposed to “global” monopoles) they will ultimately radiate away all their energy and disappear. However, while this latter consideration is indeed valid in the case of ordinary Goto-Nambu type strings, it was pointed out by Davis and Shellard that it need not apply to “superconducting” current-carrying strings of the kind originally introduced by Witten. This is because the occurrence of stable currents allows loops of string to be stabilized in states known as “vortons”, so that they cease to radiate. The way this happens is that the current, whether timelike or spacelike, breaks the Lorentz invariance along the string worldsheet, thereby leading to the possibility of rotation, with velocity $`v`$ say. The centrifugal effect of this rotation may then compensate the string tension $`T`$ in such a way as to produce an equilibrium configuration, i.e.
what is known as a vorton, in which $$T=v^2U,$$ (1) where $`U`$ is the energy per unit length in the corotating rest frame. Such a vorton state will be stable, at least classically, if it minimises the energy for given values of the pair of conserved quantities characterising the current in the loop, namely the phase winding number $`N`$ say, and the corresponding particle number $`Z`$ say, whose product determines the mass $`M`$ of the ensuing vorton state according to a rough order of magnitude formula of the form $$M\sim |NZ|^{1/2}m_\mathrm{x}$$ (2) where $`m_\mathrm{x}`$ is the relevant Kibble mass, whose square is the zero current limit value of both $`T`$ and $`U`$. If the current is electromagnetically coupled, with charge coupling constant $`e`$, then there will be a corresponding vorton charge $`Q=Ze`$. Whereas the collective energy density of a distribution of non-conducting cosmic strings will decay in a similar manner to that of a radiation gas, in contrast for a distribution of relic vortons the energy density will scale like that of ordinary matter. Thus, depending on when and how efficiently they were formed, and on how stable they are in the long run, such vortons might eventually come to dominate the density of the universe. It has been rigorously established that circular vorton configurations of this kind will commonly (though not always) be stable in the dynamic sense at the classical level, but very little is known so far about non-circular configurations or about the question of stability against quantum tunnelling effects, one of the difficulties being that the latter is likely to be sensitively model dependent. In the earliest crude quantitative estimates of the likely properties of a cosmological vorton distribution produced in this way, it was assumed not only that the Witten current was stable against leakage by tunnelling, but also that the mass scale $`m_\sigma `$ characterising the relevant carrier field was of the same order of magnitude as the Kibble mass scale $`m_\mathrm{x}`$ characterising the string itself, which will normally be given approximately by the mass of the Higgs field responsible for the relevant vacuum symmetry breaking. The most significant development in the more detailed investigations carried out more recently was the extension to cases in which $`m_\sigma `$ is considerably smaller than $`m_\mathrm{x}`$. A rather extreme example that immediately comes to mind is that for which $`m_\mathrm{x}`$ is postulated to be at the G.U.T. level, while $`m_\sigma `$ is at the electroweak level, in which case it was found that the resulting vorton density would be far too low to be cosmologically significant. The simplest scenarios are those for which (unlike the example just quoted) the relation $$\sqrt{m_\sigma }\gtrsim m_\mathrm{x}$$ (3) is satisfied in dimensionless Planck units as a rough order of magnitude inequality. In this case the current condensation would have occurred during the regime in which (as pointed out by Kibble in the early years of cosmic string theory) the dynamics was dominated by friction damping. Under these circumstances, according to the standard picture, the string distribution will consist of wiggles and loops of which the most numerous will be the shortest, characterised by a length scale $`\xi `$ say below which smaller scale structure will have been smoothed out by friction damping.
The number density $`n`$ of these smallest and most numerous loops will be given by the (dimensionally obvious) formula $$n\sim \xi ^{-3},$$ (4) in which the smoothing length scale $`\xi `$ itself is given by $$\xi \sim \sqrt{t\tau },$$ (5) where $`\tau `$ is the relevant friction damping timescale and $`t`$ is the cosmological time, which, using Planck units, will be expressible in terms of the cosmological temperature $`\mathrm{\Theta }`$ by $$t\sim \mathrm{\Theta }^{-2},$$ (6) in the radiation dominated epoch under consideration. According to the usual description of the friction dominated epoch, the relevant damping timescale will be given by $$\tau \sim m_\mathrm{x}^2\mathrm{\Theta }^{-3},$$ (7) from which it can be seen that the smoothing lengthscale $`\xi `$ that characterises the smallest and most numerous string loops will be given roughly by the well known formula $$\xi \sim m_\mathrm{x}\mathrm{\Theta }^{-5/2}.$$ (8) At the time of condensation of the current carrier field on the strings, when the temperature reaches a value $`\mathrm{\Theta }\sim m_\sigma `$, the corresponding thermal fluctuation wavelength $`\lambda `$ will be given by $$\lambda \sim m_\sigma ^{-1}.$$ (9) Taken around the circumference, of order $`\xi `$, of a typical small string loop, the number of such fluctuation wavelengths will be of order $`\xi /\lambda `$. In the cases considered previously it was assumed that the fluctuations would be randomly orientated and would therefore tend to cancel each other out so that, by the usual kind of random walk process, the net particle and winding numbers taken around the loop as a whole would be expected to be of the order of the square root of this number of wavelengths, i.e. one would typically obtain $$N\sim Z\sim \sqrt{\xi /\lambda }.$$ (10) However, a new point to which I would like to draw attention here is that the random walk cancellation effect will not apply in cases for which the current is of strictly chiral type, so that the string dynamics is of the kind whose special integrability properties have recently been pointed out. This case arises when the string current is attributable to (necessarily uncharged) fermionic zero modes moving in an exclusively rightwards (or exclusively leftwards) direction. In such a case the possibility of cancellation between left moving and right moving fluctuations on a loop does not arise, so that (as in the ordinary kind of diode rectifier circuit used for converting alternating current to direct current) there is an effective filter ensuring that the fluctuations induced on the string will all have the same orientation. In such a case only one of the quantum numbers in the formula (2) will be independent, i.e. they will be restricted by a relation of the form $`N=Z`$, and their expected value will be of the order of the total number of fluctuation wavelengths round the loop (not just the square root thereof as in the random walk case). In such a strictly chiral case the formula (2) should therefore be evaluated using an estimate of the form $$N=Z\sim \xi /\lambda ,$$ (11) instead of (10). Whereas even smaller loops will have been entirely destroyed by the friction damping process, those that are present at the time of the current condensation can survive as vortons, whose number density will be reduced in inverse proportion to the comoving volume, i.e. proportionally to $`\mathrm{\Theta }^3`$, relative to the initial number density value given by (4) when $`\mathrm{\Theta }\sim m_\sigma `$.
Thus (assuming the current on each string is strictly conserved during the subsequent evolution) when the cosmological temperature has fallen to a lower value $`\mathrm{\Theta }\ll m_\sigma `$, the expected number density $`n`$ of the vortons will be given as a constant fraction of the corresponding number density $`\mathrm{\Theta }^3`$ of black body photons by the rough order of magnitude formula $$\frac{n}{\mathrm{\Theta }^3}\sim \left(\frac{\sqrt{m_\sigma }}{m_\mathrm{x}}\right)^3m_\sigma ^3.$$ (12) In the previously considered cases, for which the random walk formula (10) applies, the typical value of the quantum numbers of vortons in the resulting population will be given very roughly by $$N\sim Z\sim m_\mathrm{x}^{1/2}m_\sigma ^{-3/4},$$ (13) which by (2) implies a typical vorton mass given by $$M\sim \left(\frac{m_\mathrm{x}}{\sqrt{m_\sigma }}\right)^{3/2},$$ (14) which, in view of (3), will never exceed the Planck mass. It follows in this case that, in order to avoid producing a cosmological mass excess, the value of $`m_\sigma `$ in this formula should not exceed a limit that works out to be of the order of $`10^{-9}`$, and the limit is even smaller, $`m_\sigma \lesssim 10^{-11}`$, when the two scales $`m_\sigma `$ and $`m_\mathrm{x}`$ are comparable. The new point to which I wish to draw attention here is that for the strictly chiral case, as characterised by (11) instead of (10), the formula (2) for the vorton mass gives a typical value $$M\sim m_\mathrm{x}^2m_\sigma ^{-3/2},$$ (15) which is greater than what is given by the usual formula (14) by a factor $`m_\mathrm{x}^{1/2}m_\sigma ^{-3/4}`$. Although the vorton to photon number density ratio (12) will not be affected, the corresponding mass density $`\rho =Mn`$ of the vorton distribution will be augmented by the same factor $`m_\mathrm{x}^{1/2}m_\sigma ^{-3/4}`$. This augmentation factor will be expressible simply as $`m_\sigma ^{-1/4}`$ when the two scales $`m_\sigma `$ and $`m_\mathrm{x}`$ are comparable, in which case the requirement that a cosmological mass excess should be avoided leads to the rather severe limit $`m_\sigma \lesssim 10^{-14}`$. This mass limit works out to be of the order of a hundred TeV, which is within the range that is commonly envisaged for the electroweak symmetry breaking transition. The foregoing conclusion can be construed as meaning that if strictly chiral current-carrying strings were formed (within the framework of some generalised, presumably supersymmetric, version of the Standard electroweak model) during the electroweak symmetry breaking phase transition, then the ensuing vorton population might conceivably constitute a significant fraction of the cosmological dark matter distribution in the universe. Although, according to (12), the number density of such chiral vortons would be rather low, their typical mass, as given according to (15) by $`M\sim \sqrt{m_\sigma }`$, would be rather large, about $`10^{-7}`$ in Planck units, which works out as about $`10^9`$ TeV. An alternative kind of scenario that naturally comes to mind is that in which the cosmic strings themselves were formed at an energy scale $`m_\mathrm{x}`$ in the GUT range (of the order of $`10^{-3}`$ in Planck units) but in which the current did not condense on the string until the thermal energy scale had dropped to a value $`m_\sigma `$ that was nearer the electroweak value (below $`10^{-10}`$ in Planck units).
However, such a much lower condensation temperature would lie outside the friction dominated range characterised by (3), so the preceding analysis would not be applicable. Preliminary evaluations of the (relatively inefficient) vorton production that would arise from current condensation after the end of the friction dominated period are already available for the usual random walk case, but analogous estimates for the augmentation that might arise in the strictly chiral case have not yet been carried out. The reason why it is not so easy to evaluate the consequences of current condensation after the end of the friction dominated epoch (when radiation damping becomes the main dissipation mechanism) is that most of the loops present at the time of the current condensation would have been too small to give vortons stable against quantum decay processes, a requirement which imposes a lower limit $$M\gtrsim m_\mathrm{x}^2/m_\sigma $$ (16) on the mass of a viable vorton. This condition is satisfied automatically by the masses estimated in the manner described above for vortons formed by condensation during the friction dominated era characterised by (3). On the other hand, when (3) is not satisfied – in which case the lower limit (16) will evidently exceed the Planck mass – the majority of loops present at the time of the carrier condensation phase transition at the temperature $`\mathrm{\Theta }\sim m_\sigma `$ will not acquire the rather large quantum number values that would be needed to make them ultimately viable as vortons. It is not at all easy to obtain firmly conclusive estimates of the small fraction that will satisfy this viability condition. However, it should not be too difficult to carry out an adaptation to the strictly chiral case of the kind of tentative provisional estimates (based on simplifying assumptions whose confirmation will require much future work) that have already been provided for the generic case of currents built up by the usual random walk process. The possibility of strictly chiral current formation is not the only mechanism whereby vorton formation might conceivably be augmented relative to what was predicted on the basis of the previous estimates, which took no account of electromagnetic effects. There cannot be any electromagnetic coupling in the strictly chiral case, and in other cases where electromagnetic coupling will typically be present it has been shown that it will usually have only a minor perturbing effect on the vorton equilibrium states. However, it has recently been remarked that even though the averaged “direct” current that is relevant for vorton formation may be small, the local ‘alternating’ current can have a sufficiently large amplitude, $`I`$ say, for its interaction with the surrounding black body radiation plasma to provide the dominant friction damping mechanism, with a damping scale that instead of (7) will be given in rough order of magnitude by $`\tau \sim m_\mathrm{x}^2I^{-1}\mathrm{\Theta }^{-2}`$. This means that instead of being restricted to the very early epoch when the cosmological temperature was above the Kibble limit value, i.e.
when $`\mathrm{\Theta }\gtrsim \sqrt{m_\mathrm{x}}`$, the period of friction domination can be extended indefinitely if the current amplitude satisfies $`I\gtrsim m_\mathrm{x}^2`$, a requirement that is easily compatible with Witten’s current saturation bound $`I\lesssim em_\mathrm{x}`$ (where $`e\approx 1/\sqrt{137}`$ is the electromagnetic charge coupling constant), and in most cases even with the more severe limit $`I\lesssim em_\sigma `$ that applies in cases for which, instead of arising as a bosonic condensate, the current is due to fermionic zero modes. Such a tendency to prolongation of friction dominance will presumably delay the decay of small scale loop structure and so may plausibly be expected to augment the efficiency of vorton formation in cases when $`m_\sigma `$ is below the limit given by (3), but a quantitative estimate of just how large this effect is likely to be will require a considerable amount of future work. Despite the possibility that the efficiency of vorton formation may have been underestimated by previous work, it still seems unlikely that vortons can constitute more than a small fraction of the missing matter in the universe. However, this does not mean that vortons could not give rise to astrophysically interesting effects: in particular it has recently been suggested by Bonazzola and Peter that they might account for otherwise inexplicable cosmic ray events. The author is grateful to many colleagues, particularly Anne Davis and Patrick Peter, for helpful discussions on numerous occasions.
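As a compact numerical summary of the estimates reviewed above, the following sketch evaluates the friction-era formulas (8)–(12) at $`\mathrm{\Theta }\sim m_\sigma `$ together with the mass estimates (13)–(15) and the viability bound (16), all in Planck units. The sample scale $`m_\mathrm{x}=m_\sigma =10^{-14}`$ and the conversion 1 Planck mass ≈ 1.2 × 10^16 TeV are illustrative assumptions, not results from the text.

```python
PLANCK_TEV = 1.2e16   # 1 Planck mass in TeV (rough conversion, assumed here)

def vorton_estimates(m_x, m_sigma, chiral):
    """Order-of-magnitude vorton properties in Planck units."""
    xi = m_x * m_sigma**-2.5                # smallest loop scale (8) at Theta ~ m_sigma
    lam = 1.0 / m_sigma                     # thermal fluctuation wavelength (9)
    n_quanta = xi / lam if chiral else (xi / lam)**0.5   # chiral (11) vs random walk (10)
    mass = n_quanta * m_x                   # vorton mass (2) with N ~ Z
    abundance = (m_sigma**0.5 / m_x)**3 * m_sigma**3     # n / Theta^3, Eq. (12)
    viable = mass >= m_x**2 / m_sigma       # quantum-stability bound (16)
    return mass, abundance, viable

for chiral in (False, True):
    mass, abundance, viable = vorton_estimates(1e-14, 1e-14, chiral)
    label = "chiral     " if chiral else "random walk"
    print(f"{label}: M ~ {mass:.1e} m_Planck ~ {mass * PLANCK_TEV:.1e} TeV, "
          f"n/Theta^3 ~ {abundance:.1e}, viable: {viable}")
```

For these inputs the chiral case reproduces the mass scale of about 10^9 TeV quoted above, while the random walk case gives a much lighter, though still viable, vorton.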
Figure 1: Dependence of the ‘massless parton’-type charm structure function $`F_2^{c,MP}`$ in Eq. () on the switching scale $`Q_0`$ in the boundary conditions in Eq. (). The solid lines represent the central value $`Q_0=m_c`$, and $`F_2^{c,MP}`$ decreases monotonically with increasing $`Q_0`$. The underlying parton distributions below $`Q_0`$ are those of Ref. []. ## Acknowledgements We thank E. Reya and I. Schienbein for valuable discussions and for carefully reading the manuscript. The work has been supported in part by the ‘Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie’, Bonn.
# Micromagnetic simulations of thermally activated magnetization reversal of nanoscale magnets ## Abstract Numerical integration of a stochastic Landau-Lifshitz-Gilbert equation is used to study dynamic processes in single-domain nanoscale magnets at nonzero temperatures. Special attention is given to including thermal fluctuations as a Langevin term, and the Fast Multipole Method is used to calculate dipole-dipole interactions. It is feasible to simulate these dynamics on the nanosecond time scale for spatial discretizations that involve on the order of $`10^4`$ nodes using a desktop workstation. The nanoscale magnets considered here are single pillars with large aspect ratio. Hysteresis-loop simulations are employed to study the stable and metastable configurations of the magnetization. Each pillar has magnetic end caps. In a time-dependent field the magnetization of the pillars is observed to reverse via nucleation, propagation, and coalescence of the end caps. In particular, the end caps propagate into the magnet and meet near the middle. A relatively long-lived defect is formed when end caps with opposite vorticity meet. Fluctuations are more important in the reversal of the magnetization for fields weaker than the zero-temperature coercive field, where the reversal is thermally activated. In this case, the process must be described by its statistical properties, such as the distribution of switching times, averaged over a large number of independent thermal histories. The effect of temperature on the switching behavior of nanoscale magnets can be quite strong when external fields are applied that are just below the zero-temperature coercive threshold. Under these conditions, thermal fluctuations can provide enough energy to take the magnetization of the system over the barrier that prevents it from aligning with the external field. These issues are important for understanding data integrity and high-speed switching in single-domain magnetic applications. Nanoscale magnets are modeled using the traditional Landau-Lifshitz-Gilbert equation, $$\frac{d𝐌_i}{dt}=\frac{\gamma _0}{1+\alpha ^2}𝐌_i\times \left(𝐇_i-\frac{\alpha }{M_S}𝐌_i\times 𝐇_i\right),$$ (1) whereby the microscopic dipoles $`𝐌_i`$ precess under their individual, locally observed applied fields $`𝐇_i`$. The universal gyromagnetic factor is $`\gamma _0=1.76\times 10^7`$ Hz/Oe, while the material parameters were selected to match those of bulk iron with a saturation magnetization $`M_S=1700\mathrm{emu}/\mathrm{cm}^3`$, exchange length $`l_x=2.6\mathrm{nm}`$, and damping parameter $`\alpha =0.1`$. For the results considered here, these fields are composed of contributions from a uniform field external to the system, the exchange interaction with neighboring dipoles, and dipole-dipole interactions with all of the other dipoles in the system. Calculation of the latter is dramatically accelerated by using a fast multipole algorithm. The numerical models are based on real nanomagnets that have been fabricated recently using scanning microscopy techniques. Thermal effects are incorporated by adding a random contribution to the local field of each spin, as first proposed by W. F. Brown nearly forty years ago. He derived the fluctuation-dissipation relation for a Stratonovich-type noise by considering the Fokker-Planck formulation of the problem. Numerically we implemented this noise using an Itô-type random field.
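As a minimal illustration of how Eq. (1) is integrated with a Langevin field, the following single-macrospin sketch uses the material parameters quoted above. The cell volume, the applied-field protocol, and the Brown-type noise normalization (written here without the optional $`1+\alpha ^2`$ factor) are assumptions of this sketch; the exchange and dipole-dipole contributions to $`𝐇_i`$ are omitted, so there is no real energy barrier and the code only illustrates the integration and renormalization steps.

```python
import numpy as np

gamma0, alpha, Ms = 1.76e7, 0.1, 1700.0   # Hz/Oe, dimensionless, emu/cm^3 (values from the text)
dx, dt = 1.5e-7, 50e-15                   # 1.5 nm cell (in cm) and 50 fs time step (in s)
V = dx**3                                 # volume of one computational cell, cm^3
kB, T = 1.381e-16, 20.0                   # erg/K, temperature in K

# Thermal field strength from Brown's fluctuation-dissipation argument (assumed normalization)
sigma = np.sqrt(2.0 * alpha * kB * T / (gamma0 * Ms * V * dt))   # Oe

def llg_rhs(M, H):
    """Right-hand side of Eq. (1) for a single dipole."""
    return gamma0 / (1.0 + alpha**2) * np.cross(M, H - (alpha / Ms) * np.cross(M, H))

rng = np.random.default_rng(0)
M = np.array([0.0, 0.0, Ms])              # start aligned along +z
H_applied = np.array([0.0, 0.0, -1850.0]) # reversed field, Oe

for step in range(20000):                 # 1 ns of Ito-type Euler steps
    H = H_applied + sigma * rng.standard_normal(3)
    M = M + dt * llg_rhs(M, H)
    M *= Ms / np.linalg.norm(M)           # renormalize to fixed length after each step
    if M[2] < 0.0:                        # switching time defined by M_z = 0
        print(f"switched at t = {step * dt * 1e9:.3f} ns")
        break
```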
The difference between the Stratonovich and Itô paradigms does not affect the results since we normalize the dipoles to a fixed length after each integration step. The individual magnets, rectangular with dimensions $`9\mathrm{nm}\times 9\mathrm{nm}\times 150\mathrm{nm}`$, were implemented using a finite-difference approach. The numerical results have been found to be independent of discretization for the spatial discretization of $`\mathrm{\Delta }x`$=$`1.5\mathrm{nm}`$ and integration step of $`\mathrm{\Delta }t`$=$`50\mathrm{fs}`$ used here. The large shape anisotropy of these magnets causes spontaneous alignment of the magnetization, except at the top and bottom where pole avoidance leads to the formation of end caps. As can be seen in Fig. 1(a), when the external field is applied in the opposite direction during a hysteresis measurement, the $`z`$-component of the magnetization is reduced via growth of these end caps. Here the uniform applied field points down, light shades indicate upward-pointing magnetization, and dark shades indicate downward-pointing magnetization. At zero temperature the end caps grow symmetrically; when they meet at the middle of the pillar a relatively long-lived defect is formed due to the opposite helicities of the two end caps. These results are fully consistent with the $`T=0`$ simulations of Ref. , where the field was swept quasi-statically as opposed to the truly dynamic sweeps considered here. The corresponding hysteresis loop, with a period of $`1`$ ns, is shown in Fig. 2. The defect is indicated by slow decay of the magnetization around the time when the external field reverses. The effect of this defect is also apparent for the longer-period hysteresis curves, for which the defect disappears before the field reverses. The simulated zero-temperature hysteresis loops are very reproducible, and no differences are seen for subsequent periods. The inset shows example variations that occur from thermal effects for the $`1`$ ns hysteresis loop. The simplest magnetization-reversal situation to study is one in which the external field suddenly changes its orientation, and then remains constant. In these simulations, the field is initially zero and then is brought negative in $`0.25`$ ns with its amplitude described by $`1/4`$ of a sine wave. In what follows we set $`t`$=$`0`$ at the time when the field first reaches its maximum negative value. When the final field is less than the coercive field, the magnetization remains oriented upward until thermal fluctuations take it over the associated free-energy barrier. In long pillars the end caps do not interact strongly, and the free energy of each can be considered separately. The free energy as a function of end cap volume has essentially three extrema: one local minimum corresponding to a small end cap, one local maximum corresponding to an unstable volume where the tendency for shrinkage is equal to the tendency for growth, and the global minimum corresponding to a switched magnet (spanning end cap). Snapshots from the magnetization reversal of one magnet in this situation at $`H`$=$`1850`$ Oe and $`20`$ K are shown in Fig. 1(b). Here the lower end cap undergoes a large fluctuation to take it past the critical volume, after which it grows at an almost constant rate to fill the entire magnet. (Again a long-lived defect forms when the two end caps come into contact.)
The thermally-activated nature of the end cap growth leads to a distribution in the switching times, defined as the time when $`M_z`$=$`0`$. The probability of not switching, $`P_{\mathrm{not}}(t)`$, for $`85`$ switches at $`H=1800`$ Oe and $`20`$ K is shown as the heavy, stepped curve in Fig. 3. Under these conditions, the majority of simulated switches occur between $`0.5`$ and $`1.2`$ ns after the field reversal finishes. There are no switches before $`0.4`$ ns because it takes this amount of time for a single supercritical end cap to grow to fill half of the magnet. The traces of the average pillar magnetization density as a function of time are shown for five different switches in the inset of Fig. 3. There are essentially two slopes observed during the actual switching process, corresponding to cases where one or both end caps are growing. The observation that the end caps decay essentially independently and exponentially, with rate $`\rho `$, and that freely growing end caps change the global magnetization (normalized to lie between $`\pm 1`$) at a constant rate $`v`$, can be used to construct a simple model to describe the distribution of switching times observed in individual experiments. The resulting probability of not switching is $$P_{\mathrm{not}}\left(t\right)=\{\begin{array}{cc}1\hfill & \hfill t<1/(2v)\\ e^{-(2\rho t-\rho /v)}\left(1+2\rho t-\rho /v\right)\hfill & \hfill 1/(2v)\le t<1/v\\ e^{-(2\rho t-\rho /v)}\left(1+\rho /v\right)\hfill & \hfill 1/v\le t\end{array}.$$ (2) Taking $`\rho `$ and $`v`$ as parameters, a nonlinear fit of this two-exponential decay theory to the first two moments of the data is shown as the dashed line in Fig. 3. For comparison, a similar fitting has also been performed for an error function form with two parameters (corresponding to a Gaussian histogram of switching times), which is shown as the dotted curve in Fig. 3. From the $`85`$ switches presented here there is no clear advantage to either fitting function. Different combinations of field and temperature should probe regions where the fitting functions are not so similar. In summary, Landau-Lifshitz-Gilbert dynamics have been simulated for three-dimensional models of single-domain nanoscale magnets of large aspect ratio. Hysteresis-loop and field-reversal simulations show that magnetization reversal occurs through the nucleation, growth, and coalescence of the end caps. For field-reversal simulations at nonzero $`T`$ which require thermal fluctuations to complete the reversal, a simple theory that considers the nucleation rate and growth velocity of the end caps adequately describes the statistical distribution of the switching times. Supported by NSF grant No. DMR-9871455, NERSC, and by FSU/SCRI and FSU/MARTECH.
# On Two Aspects of the Painlevé Analysis ## 1 Introduction The Painlevé analysis is a simple and reliable tool for testing the integrability of nonlinear ODEs and PDEs. Concerning PDEs, there is strong empirical evidence—though no proven theorem as yet—that every nonlinear equation possessing the Painlevé property in its formulation for PDEs inevitably belongs either to the class of $`C`$-integrable equations (exactly linearizable equations, including Darboux integrable ones) or to the class of $`S`$-integrable equations (Lax integrable equations, including Liouville integrable bi-Hamiltonian ones). The Painlevé analysis of nonlinear PDEs is usually performed along the so-called Weiss–Kruskal algorithm, which combines the Weiss singular expansions of solutions around movable hypersurfaces and the Kruskal simplifying representation for singularity manifolds, and which follows step by step the Ablowitz–Ramani–Segur algorithm for ODEs. The very first step of the Weiss–Kruskal algorithm, however, has no counterpart in the Ablowitz–Ramani–Segur algorithm: starting the Painlevé analysis of a nonlinear PDE, one must determine which of the analytic hypersurfaces are characteristic for the tested equation, in order to perform the whole subsequent analysis of solutions around non-characteristic hypersurfaces only. Ward was the first to state and substantiate that the Painlevé property for PDEs must not fix any structure of solutions at characteristic hypersurfaces. Afterward, the essence of Ward’s statement was mentioned as “a fact tacitly assumed by all Painlevé practitioners”. Lately, however, Weiss declared that his result “runs counter to the observation of Ward” and that “expansions about characteristic manifolds are required to be single-valued” as functionals of data. In the present paper, in Section 2, we show that Ward’s definition of the Painlevé property for PDEs still remains well-founded and that the objections of Weiss are caused by some terminological confusion. We do this via the singularity analysis of the Calogero equation: $$u_{xxxy}-2u_yu_{xx}-4u_xu_{xy}+u_{xt}=0.$$ (1) This nonlinear PDE (1) is useful to illustrate one more aspect of the Painlevé analysis. In Section 3, we find two different Bäcklund transformations of the Calogero equation (1) into itself: one follows from the truncated singular expansion for $`u`$, the other one follows from the Lax pair of (1), and the former turns out to be a special case of the square of the latter. Consequently, the Painlevé analysis does not lead to the simplest, or elementary, auto-Bäcklund transformation of (1), a phenomenon similar to what was observed in. Section 4 contains concluding remarks. ## 2 Breaking Solitons and the Painlevé Property Let us take the fourth-order three-dimensional nonlinear PDE (1) and assume for a while that we know nothing about its integrability and solutions. Does this equation (1) pass the Painlevé test for integrability? The answer will be “yes”, if we adopt Ward’s definition of the Painlevé property for PDEs. But the answer will be “no”, if we change the definition as proposed by Weiss. It is an easy task to perform the Painlevé analysis of (1) along the Weiss–Kruskal algorithm. A hypersurface $`\varphi (x,y,t)=0`$ is non-characteristic for the PDE (1) if $`\varphi _x^3\varphi _y\ne 0`$ (see, e.g., for the definition and meaning of non-characteristic hypersurfaces).
The Kruskal ansatz $`\varphi =x+\psi (y,t)`$ with $`\psi _y\ne 0`$ both simplifies calculations and excludes characteristic hypersurfaces from consideration. The assumption that the dominant behavior of solutions is algebraic around $`\varphi =0`$, that is $`u=u_0(y,t)\varphi ^p+\mathrm{}`$, leads to the only branch to be tested: $`p=-1`$ with $`u_0=-2`$. (Branches with $`p=0,1,2,3`$, also admitted by (1), need no analysis: they are governed by the Cauchy–Kovalevskaya theorem because the Kovalevskaya form of the PDE (1) is analytic everywhere.) Then we substitute $`u=-2\varphi ^{-1}+\mathrm{}+u_r(y,t)\varphi ^{r-1}+\mathrm{}`$ into (1), find that $`u_r`$ is not determined if $`r=-1,1,4,6`$ ($`r=-1`$ corresponds to the arbitrariness of $`\psi `$), and conclude that the tested branch is generic. Finally, we substitute $`u=\sum _{i=0}^{\mathrm{}}u_i(y,t)\varphi ^{i-1}`$ into (1), find recursion relations for $`u_i`$, and check compatibility conditions at the resonances, where the arbitrary functions $`u_1`$, $`u_4`$ and $`u_6`$ appear. All the compatibility conditions turn out to be identities. Consequently, the PDE (1) has passed the Weiss–Kruskal algorithm well. Since this algorithm is sensitive to algebraic and non-dominant logarithmic singularities only, we can only conjecture that the tested equation possesses the Painlevé property in the sense that all solutions of (1) are single-valued around all non-characteristic hypersurfaces. And we should expect (1) to be integrable. The PDE (1) is integrable indeed. It arises as the compatibility condition for the over-determined, linear in $`\mathrm{\Phi }`$, system $$\mathrm{\Phi }_{xx}+(\alpha -u_x)\mathrm{\Phi }=0,$$ (2) $$\mathrm{\Phi }_t+4\mathrm{\Phi }_{xxy}-2u_y\mathrm{\Phi }_x-4u_x\mathrm{\Phi }_y-3u_{xy}\mathrm{\Phi }=0,$$ (3) where the spectral parameter $`\alpha `$ is any function $`\alpha (y,t)`$ satisfying the equation $$\alpha _t=4\alpha \alpha _y.$$ (4) All solutions of (4), except $`\alpha =\text{constant}`$, are multi-valued functions: for any non-constant initial value $`\alpha (y,0)`$, the nonlinear wave $`\alpha =\alpha (y,t)`$ inevitably breaks (overturns, overlaps) at some finite $`t`$. Therefore solutions of (1), obtainable by the inverse scattering transform with any non-constant $`\alpha `$, are multi-valued functions as well. For example, the one-soliton solution of (1), $$u=-2\lambda \mathrm{tanh}(\lambda x+\mu )+\beta ,$$ (5) where $`\lambda (y,t)`$, $`\mu (y,t)`$ and $`\beta (y,t)`$ are any functions satisfying the equations $`\lambda _t+4\lambda ^2\lambda _y=0`$ ($`\lambda ^2=-\alpha `$) and $`\mu _t+4\lambda ^2\mu _y=2\lambda \beta _y`$, becomes a multi-valued function when $`\alpha `$ breaks. The $`N`$-soliton solution of (1), determined by $`N`$ solutions $`\alpha _1,\mathrm{},\alpha _N`$ of (4), breaks whenever any of $`\alpha _1,\mathrm{},\alpha _N`$ breaks. At first sight, such a complicated branching of solutions of (1) seems to be incompatible with the Painlevé property. Nevertheless, there is no contradiction between the fact that solutions of (1) are multi-valued functions and the fact that solutions of (1) are single-valued around all non-characteristic hypersurfaces: solutions can branch and do branch at characteristic hypersurfaces only. Indeed, it was noticed and stressed in that solutions of (1) break (i.e. $`u_y\to \mathrm{}`$ at finite values of $`u`$) for all values of $`x`$ simultaneously, the fact meaning that the corresponding singularity manifolds $`\varphi =0`$ are characteristic for (1), $`\varphi _x=0`$.
Consequently, the breaking solitons do not break the Painlevé property in Ward’s formulation because they never break at non-characteristic hypersurfaces. Let us proceed to Weiss’ objections against Ward’s formulation of the Painlevé property for PDEs. The Weiss counter-example is “the expansion about the characteristic manifold” $`u=u_0(t)+\sum _{i=3}^{\mathrm{}}u_i(t)\varphi ^i`$ with $`\varphi =x+\psi (t)`$ for the equation $`u_{xxx}=\frac{3}{2}u_x^{-1}u_{xx}^2+ku_xu_t`$, where $`k=\text{constant}`$. This expansion, however, does not represent solutions around characteristic hypersurfaces: characteristic hypersurfaces for this equation are determined by the condition $`\varphi _x=0`$, not by $`\varphi _x=1`$. Actually, this Taylor expansion represents solutions around any non-characteristic hypersurface $`\varphi =0`$ in the important case when the Cauchy–Kovalevskaya theorem does not work: in this case we have $`u_x=0`$ at $`\varphi =0`$, whereas the Kovalevskaya form of the equation is singular at $`u_x=0`$. We can only agree with Weiss that the consideration of such special Taylor expansions is an essential part of the Painlevé analysis, but Weiss’ words “the expansion about the characteristic manifold” turn out to be too misleading because no actual expansions around characteristic hypersurfaces can be found in the papers. Now let us return to the Calogero equation (1) and see what really happens at the characteristic hypersurfaces $`\varphi (x,y,t)=0`$ with $`\varphi _x\varphi _y=0`$. When, for example, $`\varphi _x=0`$ and $`\varphi _y\ne 0`$, we take $`\varphi =y+\psi (t)`$ and $`u=u_0(x,t)\varphi ^p+\mathrm{}`$, $`p=\text{constant}`$, and find from (1) that any value of $`p`$ is admissible. Therefore the expansions will not be single-valued functionals of $`\varphi `$ for non-integer $`p`$. For example, if $`p=-\frac{1}{2}`$, we get the expansion $$u=\sum _{i=0}^{\mathrm{}}u_i(x,t)\varphi ^{(i-1)/2}$$ (6) with the coefficients $`u_i`$ determined by the recursion relations $$\sum _{i=0}^{n}(i-1)[u_i(u_{n-i})_{xx}+2(u_i)_x(u_{n-i})_x]$$ $$-\frac{1}{2}(n-2)[(u_{n-1})_{xxx}+\psi _t(u_{n-1})_x]-(u_{n-3})_{xt}=0,$$ (7) where $`n=0,1,2,\mathrm{}`$, and $`u_i=0`$ at $`i<0`$. The structure of (7) differs from the habitual structure of recursion relations for non-characteristic hypersurfaces very considerably. There are no resonances in (7), but the expansion (6) contains infinitely many arbitrary functions of $`t`$ in addition to $`\psi (t)`$: they arise pair by pair as “constants” of integration of (7) because (7) is a second-order ODE in $`u_n`$ for every $`n`$. Namely, $`u_0=(\sigma _0x+\tau _0)^{1/3}`$, $`u_1=\frac{5}{36}\sigma _0(\sigma _0x+\tau _0)^{-1}+\sigma _1(\sigma _0x+\tau _0)^{1/3}+\tau _1+\frac{1}{4}\psi _tx`$, $`u_2=\sigma _2(\sigma _0x+\tau _0)^{1/3}+\tau _2(\sigma _0x+\tau _0)^{2/3}`$, etc., where $`\sigma _i(t)`$ and $`\tau _i(t)`$ are arbitrary functions, $`i=0,1,2,\mathrm{}`$. We see that the expansion (6) is multi-valued both as a function of $`x,y,t`$ (via the coefficients $`u_i`$ and non-integer degrees of $`\varphi `$) and as a functional of $`\varphi `$. Consequently, if we accept Weiss’ formulation of the Painlevé property, the integrable Calogero equation (1) will not pass the Painlevé test for integrability. Evidently, Weiss’ formulation asks too much of the tested equation. ## 3 Two auto-Bäcklund transformations Let us try to find a Bäcklund transformation of the Calogero equation (1) into itself.
Two different methods will lead us to two different transformations. Then we will find a relation between the two results. First we employ the method of truncated singular expansions of Weiss and the new expansion function $`\chi =(\varphi ^{-1}\varphi _x-\frac{1}{2}\varphi _x^{-1}\varphi _{xx})^{-1}`$ of Conte, which simplifies computations very considerably (note also that the Kruskal ansatz is not used for $`\varphi `$ in what follows). We substitute $`u=g(x,y,t)\chi ^{-1}+f(x,y,t)`$ into the Calogero equation (1) and find that $`g=-2`$ and that $`\varphi `$ and $`f`$ must satisfy the following system of four equations: $$d-2c(s+2f_x)+2f_y=0,$$ (8) $$d_x-\frac{1}{2}c(s_x+2f_{xx})-2c_x(s+2f_x)+2f_{xy}=0,$$ (9) $$d_{xx}+ds-c_x(s_x+2f_{xx})-2(c_{xx}+cs)(s+2f_x)-s_{xy}+2sf_y=0,$$ (10) $$s_{xxy}+f_{xxxy}+(c_{xx}+cs)(s_x+2f_{xx})-2f_y(s_x+f_{xx})$$ $$-4(s+f_x)(s_y+f_{xy})+2ss_y+s_t+f_{xt}=0,$$ (11) where $`s=\varphi _x^{-1}\varphi _{xxx}-\frac{3}{2}\varphi _x^{-2}\varphi _{xx}^2`$, $`c=-\varphi _x^{-1}\varphi _y`$, and $`d=-\varphi _x^{-1}\varphi _t`$. Substituting (8) into (9), we get $`s_x+2f_{xx}=0`$, which leads to $`s+2f_x=2\alpha `$, where the function $`\alpha (y,t)`$ appears as a “constant” of integration. Then (8) changes into $`d-4\alpha c+2f_y=0`$, (10) is satisfied identically, and $`\alpha _t=4\alpha \alpha _y`$ follows from (11) (that is why we use the same letter $`\alpha `$ as for the spectral parameter). Consequently, the system (8)–(11) is equivalent to the system of two equations $$\begin{array}{c}\varphi _{xxx}-\frac{3}{2}\varphi _x^{-1}\varphi _{xx}^2+2\varphi _xf_x-2\alpha \varphi _x=0,\\ \varphi _t-2\varphi _xf_y-4\alpha \varphi _y=0,\end{array}$$ (12) where $`\alpha (y,t)`$ is any solution of (4). The truncated expansion $$u=\varphi _x^{-1}\varphi _{xx}-2\varphi ^{-1}\varphi _x+f$$ (13) is a Miura transformation of the system (12) into the equation (1). One more Miura transformation of (12) into (1), namely, $$v=\varphi _x^{-1}\varphi _{xx}+f,$$ (14) where $`v`$ satisfies (1), follows from (13) automatically. This chain of two Miura transformations (13) and (14) generates an auto-Bäcklund transformation for (1). Indeed, eliminating $`\varphi `$ and $`f`$ from (13) and (14) by means of (12) and differentiations, we get the following system: $$w_{xx}-\frac{1}{2}w^{-1}w_x^2-wz_x+\frac{1}{8}w^3+2\alpha w=0,$$ (15) $$z_{xy}-w^{-1}(w_xz_y+4\alpha w_y-w_t)-\frac{1}{2}ww_y-4\alpha _y=0,$$ (16) where $`w=u-v`$, $`z=u+v`$, and $`\alpha (y,t)`$ is any solution of (4). Direct but tedious computations prove that the system (15)–(16) is compatible in $`v`$ if $`u`$ satisfies (1), and that the system is compatible in $`u`$ if $`v`$ satisfies (1) (i.e. one gets (1) for $`u`$ when one eliminates $`v`$ from (15)–(16) by differentiations, and vice versa). Therefore, according to the definition, the system (15)–(16) is a Bäcklund transformation of the PDE (1) into itself. It looks strange, however, that the $`x`$-part (15) of the obtained Bäcklund transformation is a second-order ODE, whereas the equation (2) of the associated linear problem for the PDE (1) is the same as for the potential KdV equation $`u_t=u_{xxx}-3u_x^2`$. Let us apply the method of Chen to the linear problem (2)–(3) and find that the PDE (1) does admit one more auto-Bäcklund transformation with the same first-order $`x`$-part as for the potential KdV equation.
We rewrite (2) as $`u_x=(\mathrm{\Phi }_x/\mathrm{\Phi })_x+(\mathrm{\Phi }_x/\mathrm{\Phi })^2+\alpha `$, introduce the new variable $`\omega `$ such that $`\omega _x=(\mathrm{\Phi }_x/\mathrm{\Phi })^2+\alpha `$, and get in this way $`u=\omega \pm \epsilon `$, where $`\epsilon =(\omega _x-\alpha )^{1/2}`$. Then (3) gives us the following fourth-order PDE for $`\omega `$: $$\epsilon _{xxy}-2(\omega _y\epsilon )_x-4\alpha \epsilon _y+\epsilon _t=0.$$ (17) It is very essential that (17) is one and the same equation for both choices of the sign in $`u=\omega \pm \epsilon `$. Owing to this fact, we have two Miura transformations of (17) into (1), namely, $`u`$ $`=\omega +(\omega _x-\alpha )^{1/2},`$ (18) $`v`$ $`=\omega -(\omega _x-\alpha )^{1/2},`$ (19) where $`u`$ and $`v`$ are solutions of (1) if $`\omega `$ satisfies (17). Eliminating $`\omega `$ from (18), (19) and (17), we get $$z_x-\frac{1}{2}w^2-2\alpha =0,$$ (20) $$w_{xxy}-w_xz_y-(w^2+4\alpha )w_y+w_t-2\alpha _yw=0,$$ (21) where $`w=u-v`$, $`z=u+v`$, and $`\alpha (y,t)`$ is any solution of (4). One can check that the system (20)–(21) is compatible in $`v`$ if $`u`$ satisfies (1), and vice versa. Therefore the equations (20) and (21) constitute a Bäcklund transformation of the PDE (1) into itself. We have obtained two auto-Bäcklund transformations for the Calogero equation (1): (15)–(16) and (20)–(21). The two transformations are different both in their form, which is evident, and in the solutions $`u`$ they generate from a given solution $`v`$. For example, if $`v`$ is any function $`\gamma (y,t)`$, we find from (20)–(21) that the corresponding $`u`$ is the one-soliton solution (5) with $`\beta =\gamma `$, whereas the transformation (15)–(16) leads for this $`v`$ either to the soliton (5) with $`\beta =\gamma -2\lambda `$ or to the more complicated solution $$u=-8\lambda [\mathrm{cosh}(\lambda x+\mu )]^2\{\mathrm{sinh}[2(\lambda x+\mu )]+2(\lambda x+\nu )\}^{-1}+\gamma ,$$ (22) where the functions $`\lambda (y,t)`$, $`\mu (y,t)`$ and $`\nu (y,t)`$ are any solutions of the equations $`\lambda _t+4\lambda ^2\lambda _y=0`$ ($`\lambda ^2=-\alpha `$), $`\mu _t+4\lambda ^2\mu _y=2\lambda \gamma _y`$ and $`\nu _t+4\lambda ^2\nu _y-8\lambda \lambda _y\nu =2\lambda \gamma _y-8\lambda ^2\mu _y`$. (For completeness, we should also mention the generated solutions $`u`$ with $`u_x=0`$: $`u=\gamma -2\lambda `$ for (20)–(21), and $`u=\gamma `$ and $`u=\gamma -4\lambda `$ for (15)–(16), $`\lambda ^2=-\alpha `$.) Nevertheless, these two different auto-Bäcklund transformations are related to each other: the transformation (15)–(16) is nothing but a special case of the square of the transformation (20)–(21). More precisely, if functions $`a`$, $`b`$ and $`q`$ are such that $`u=a`$ and $`v=q`$ satisfy the system (20)–(21) with some spectral parameter $`\alpha `$, and $`u=q`$ and $`v=b`$ satisfy the system (20)–(21) with the same $`\alpha `$, then $`u=a`$ and $`v=b`$ satisfy the system (15)–(16) with the same spectral parameter $`\alpha `$. Indeed, eliminating $`q`$ from the relations $`a_x+q_x=\frac{1}{2}(a-q)^2+2\alpha _1`$ and $`q_x+b_x=\frac{1}{2}(q-b)^2+2\alpha _2`$, we get (15) for $`u=a`$ and $`v=b`$ if and only if $`\alpha _1=\alpha _2=\alpha `$, and (16) follows from (21) in the same way. Therefore our words “a special case of the square” mean that the transformation (15)–(16) is composed of two transformations (20)–(21) with equal spectral parameters.
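These relations are easy to check symbolically. Under the simplifying assumptions of constant $`\lambda `$ and $`\beta `$, with $`\mu =k(y-4\lambda ^2t)`$ (a particular solution of the constraint equations above, with $`k`$ an arbitrary constant introduced for this check), the following SymPy snippet verifies that the one-soliton expression (5) satisfies the Calogero equation (1):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
lam, k, beta = sp.symbols('lambda k beta', real=True)

# One-soliton solution (5) with constant lambda, beta and mu = k*(y - 4*lambda**2*t),
# which satisfies lambda_t + 4*lambda**2*lambda_y = 0 and mu_t + 4*lambda**2*mu_y = 0
u = -2*lam*sp.tanh(lam*x + k*(y - 4*lam**2*t)) + beta

calogero = (sp.diff(u, x, x, x, y)
            - 2*sp.diff(u, y)*sp.diff(u, x, x)
            - 4*sp.diff(u, x)*sp.diff(u, x, y)
            + sp.diff(u, x, t))
print(sp.simplify(calogero))   # prints 0
```

The same template, with (5) replaced by (22) and the corresponding constraints, can be used to check the more complicated solution as well.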
We have shown that the method of truncated singular expansions does not lead to the simplest auto-Bäcklund transformation of the Calogero equation (1), related to the equation’s Lax pair, and one may only guess that the transformation (20)–(21) can be derived from the transformation (15)–(16). ## 4 Conclusion In this paper, we used the Calogero equation to illustrate the following two aspects of the Painlevé analysis of nonlinear PDEs. First, if a nonlinear equation passes the Painlevé test for integrability, the singular expansions of its solutions around characteristic hypersurfaces can be neither single-valued functions of independent variables nor single-valued functionals of data. Of course, if the Painlevé property is considered as an abstract analytic property, one may give any definition of it. However, if the Painlevé property is defined to be used as an indicator of integrability of nonlinear equations, the adequacy of its definition becomes an experimental result. By the singularity analysis of the Calogero equation, we have shown that Ward’s definition of the Painlevé property for PDEs is well founded. Second, if the truncation of singular expansions of solutions is consistent, the truncation does not necessarily lead to the simplest, or elementary, auto-Bäcklund transformation related to the Lax pair. We have found two different Bäcklund transformations of the Calogero equation into itself: one follows from the truncated singular expansion, the other one follows from the Lax pair, and the former turns out to be a special case of the square of the latter. In other words, the way from the truncated singular expansions to Bäcklund transformations and Lax pairs is not so straightforward as it is sometimes stated in the literature.
# Clusters of interstitial carbon atoms near the graphite surface as a possible origin of dome-like features observed by STM V.F. Elesin and L.A. Openov Moscow State Engineering Physics Institute (Technical University), Moscow, 115409, Russia Abstract > Formation of clusters of interstitial carbon atoms between the surface and second atomic layers of graphite is demonstrated by means of molecular dynamics simulations. It is shown that interstitial clusters result in the dome-like surface features that may be associated with some of the hillocks observed by STM on the irradiated graphite surface. 1. Introduction Ion irradiation of the graphite surface results in formation of defects that are seen by scanning tunneling microscopy (STM). A single-ion impact creates a hillock whose nature still remains controversial, since STM probes the electronic states of the surface rather than the actual surface topography. So, it is difficult to deduce the atomic structure in the vicinity of defects based on STM data alone. A number of suggestions have been made concerning the interpretation of hillocks seen by STM on graphite surfaces irradiated by low-energy ($`\sim 100`$ eV) ions, including, e.g., the incident ions trapped between the surface and second layer of atoms in graphite, the vacancies on the graphite surface, self-atom implantation, etc. It seems, however, that in many cases the formation of hillocks involves carbon atoms only. Recent molecular dynamics simulations indicate that carbon interlayer interstitials below the surface layer account for $`\sim 50\%`$ of all defects produced by incident and secondary recoils with the energy $`E=300`$ eV $`÷`$ 3 keV, while $`\sim 100\%`$ of defects produced by secondary recoils with the energy $`E=100`$ eV $`÷`$ 300 eV are single carbon interstitials. It was noticed in that the single interstitials may migrate to form clusters between the layers, which may be the source of some of the hillocks. The authors of arrived at a conclusion that, at most, half of the surface defects may be formed by interstitial carbon clusters. As to high-energy ($`\sim 100`$ keV) ion implantation, the hillock shape of the defects was attributed by several authors to the formation of single carbon interstitials and interstitial carbon clusters between the graphite layers, each hillock originating from a single ion impact. Two types of hillocks were found in , one of which showed the undisturbed (i.e. definitely without vacancies) lateral atomic arrangement on the (0001) graphite surface, while the other one exhibited a distorted atomic structure (but also presumably without vacancies). So, it is likely that the hillocks formed under both low- and high-energy ion irradiation of graphite are mainly due to interlayer carbon interstitials and/or their clusters. If this explanation is true, a unique opportunity appears to study the processes of cluster formation and a related phenomenon of radiation-induced swelling of graphite by means of STM. Besides, there is a need for a theory of formation, structure, and arrangement of such clusters. The theory should give an answer to several key questions: 1) what is the reason for attraction of interlayer interstitials; 2) what are the physical conditions of cluster formation; 3) why the recombination of interstitials with vacancies does not inhibit the formation of clusters.
The purpose of this paper is to study numerically the formation of clusters of interstitial carbon atoms in the interlayer region between the surface and second graphite layers. Recently we have shown by means of molecular dynamics simulations that interlayer carbon interstitials do form clusters in the bulk of graphite. The physical reason for this effect is the deformation interaction of interstitials mediated by the nearby graphite sheets. An interstitial stretches the lattice in its vicinity, and the stretched domain appears to be attractive for other interstitials located within the same interlayer region, see Fig.1. In other words, the formation of interstitial clusters minimizes the deformation energy relative to the case of spatially separated interstitials, by minimizing the overall buckling of adjacent graphite sheets. It is important that the mobility of interstitial clusters is much lower than that of single interstitials, and hence the recombination of interstitials with vacancies is strongly suppressed. On general grounds, one can expect the same mechanism of cluster formation to operate near the graphite surface. Indeed, the formation of clusters of noble-gas atoms inserted between the topmost and second atomic layers of graphite has been demonstrated numerically by Marton et al. We are not aware of similar studies on clusters of carbon atoms. The paper is organized as follows. First we present the general theory of the diffusion instability of a uniform distribution of defects in solids. Next we numerically simulate the formation of interstitial carbon clusters underneath the surface layer of graphite. Our calculations show that such clusters can indeed be formed. Their specific shapes are dictated by the initial distribution of interstitials in the interlayer region and by the strong covalent interaction between the interstitials. The interstitial clusters give rise to dome-like features on the graphite surface that are similar to those observed experimentally by STM. 2. Diffusion instability of a system of defects in solids In this Section we sketch the main features of the theory of diffusion instability that can lead to the formation of clusters of defects in disordered solids, see for more details. We shall demonstrate that the uniform distribution of defects is unstable with respect to separation into defect-rich and defect-poor regions if the concentration of defects is sufficiently high and exceeds a critical value that depends on material parameters and temperature. The mechanism of the instability is rather general: the deformation of the crystal lattice by defects results in an anomalous flow of defects along the gradient of the defect concentration $`n(𝐫,t)`$, i.e. opposite in direction to the diffusion flow of defects. In the simplest case of defects of a single type, the kinetic equation for $`n(𝐫,t)`$ is $$\frac{\partial n}{\partial t}=Q-R(n)-\mathrm{div}𝐣,$$ (1) where $`Q`$ is the source of defects, the term $`R(n)`$ accounts for recombination processes, and $`𝐣`$ is the defect current density. In the case of an isotropic continuum medium, the expression for $`𝐣`$ reads $$𝐣=-D\frac{\partial n}{\partial 𝐫}+\frac{D\mathrm{\Omega }n}{3T}\frac{\partial \sigma _{ii}}{\partial 𝐫},$$ (2) where the first term corresponds to the diffusion of defects, while the second one describes the motion of defects in the field of elastic stresses ($`\sigma _{ii}`$ being the trace of the stress tensor $`\sigma `$).
Here $`D`$ is the diffusion coefficient and $`\mathrm{\Omega }`$ is the dilatation volume ($`\mathrm{\Omega }>0`$ for interstitials and $`\mathrm{\Omega }<0`$ for vacancies; $`|\mathrm{\Omega }|\sim a^3`$, where $`a`$ is the lattice constant). In turn, the average elastic stresses $`\sigma _{ii}`$ caused by defects satisfy the following equation of the theory of elasticity: $$\frac{\partial \sigma _{ii}}{\partial 𝐫}=3K^{\prime }\mathrm{\Omega }\frac{\partial n}{\partial 𝐫},$$ (3) where $$K^{\prime }=K\left(1+\frac{4\mu }{3K}\right)^{-1},$$ (4) and $`K`$ and $`\mu `$ are elastic moduli. Substituting Eq.(3) in Eq.(2), we obtain the following expression for $`𝐣`$: $$𝐣=-D\left(1-\frac{\mathrm{\Omega }^2K^{\prime }n}{T}\right)\frac{\partial n}{\partial 𝐫}.$$ (5) It follows from Eq.(5) that the current caused by elastic stresses is directed opposite to the diffusion component of the current. This result has a simple physical meaning. Consider, for example, a system of defects with $`\mathrm{\Omega }>0`$. Such defects (e.g., interstitials) stretch the lattice, i.e. increase the average lattice constant, see Eq.(3). But since interstitials themselves tend to move to the stretched regions of the sample, there is an anomalous current along the gradient of the defect concentration. This is what we call the deformation interaction of defects. The steady-state spatially uniform solutions of Eqs. (1), (5) appear to be unstable at $$n>n_c=\frac{T}{\mathrm{\Omega }^2K^{\prime }}.$$ (6) Hence, when the average concentration of defects exceeds the critical concentration $`n_c`$, agglomerations of defects (clusters) are formed. For $`T=300`$ K and typical values of $`\mathrm{\Omega }\sim 5\times 10^{-23}`$ cm³ and $`K^{\prime }\sim 10^{12}`$ erg/cm³ we have $`n_c\sim 10^{19}`$ cm⁻³, a value much smaller than the atomic concentration. We stress that the results presented above are obtained within the framework of a standard defect model in the context of the linear theory of elasticity, without the use of any new assumptions or parameters. Note also that the diffusion instability is a common feature of a number of other physical systems, e.g., superconductors and excitonic insulators. Our results hold true also for defects with $`\mathrm{\Omega }<0`$, e.g., vacancies. Indeed, a vacancy compresses the lattice, i.e. reduces the average lattice constant, while the compressed domain appears to be attractive for other vacancies. If both defects with $`\mathrm{\Omega }>0`$ and $`\mathrm{\Omega }<0`$ (e.g., interstitials and vacancies respectively) are present in the sample, their effective interaction with each other via the lattice deformation is repulsive. Such a repulsion leads to spatial separation of vacancies and interstitials. Interstitials form interstitial clusters, while vacancies form vacancy clusters. The interstitial clusters are much less mobile than a single interstitial. As a result, recombination is suppressed, and the concentration of defects under the influence of an external source can increase monotonically for a very long period of time without saturation. We stress that our model was formulated for the case of a finite defect concentration, and hence does not imply that, e.g., a single vacancy and a single interstitial have a repulsive interaction.
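As a quick numerical check of the critical concentration (6), the following sketch (in Python, with standard CGS constants) reproduces the order-of-magnitude estimate $`n_c\sim 10^{19}`$ cm⁻³; the material parameters are the illustrative values quoted above, and the temperature enters Eq. (6) in energy units, hence the factor $`k_B`$:

```python
# Order-of-magnitude check of Eq. (6): n_c = T / (Omega^2 * K'),
# with the temperature expressed in energy units (k_B * T).

k_B = 1.381e-16       # Boltzmann constant [erg/K]

T = 300.0             # temperature [K]
Omega = 5.0e-23       # dilatation volume [cm^3] (value quoted above)
K_prime = 1.0e12      # effective elastic modulus K' [erg/cm^3] (value quoted above)

n_c = k_B * T / (Omega ** 2 * K_prime)   # critical concentration [cm^-3]
print(f"n_c = {n_c:.1e} cm^-3")          # ~1.7e19, i.e. of order 10^19 cm^-3
```

For comparison, the atomic concentration of graphite is roughly 10²³ cm⁻³, so the instability indeed sets in at defect concentrations that are small on the atomic scale.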
It should be noted that while in isotropic crystals the characteristic range of the deformation interaction is very short, about one lattice constant (and hence such an interaction can be viewed as point-like in the continuum limit), in anisotropic crystals the range of the deformation interaction can be much longer. In particular, interlayer interstitials in graphite (both impurities and carbon atoms) are known to deform atomic layers very strongly. As a corollary, the deformation interaction between the defects in graphite should play an important role in the temporal evolution of the system of defects under irradiation and influence the spatial arrangement of defects over the sample. The formation of spatially separated interstitial and vacancy clusters and the corresponding suppression of recombination can explain some peculiar features of graphite swelling under irradiation. Note that other mechanisms (e.g., the specific bonding characteristics of the defects) can also contribute to the recombination suppression in graphite. The formation of clusters of interstitial carbon atoms in the bulk of graphite has been demonstrated numerically using empirical interatomic potentials. Below we study the formation of such clusters beneath the surface graphite layer, taking into account the strong covalent bonding between the carbon atoms constituting the cluster. 3. Computational details Graphite was modeled by six layers of hexagonal carbon networks, each layer staggered with respect to its neighbors, see Fig.2. The layers were composed of 220 atoms each. The atoms at the periphery of each layer were fixed in order to maintain the stability of the crystallite. All other atoms were allowed to move after the interstitials had been inserted into the crystallite. Initially ($`t=0`$) ten interstitial carbon atoms were positioned randomly between one of the peripheral layers (the “surface layer”) and the penultimate layer. In the presence of interstitials, the undisturbed graphite layers correspond to a highly nonequilibrium atomic configuration. The potential energy of the system rapidly decreases in time, leading to an increase in the kinetic energy. An additional decrease of the potential energy occurs because of the formation of covalent bonds between the interstitial carbon atoms upon fusion of interstitials into clusters. In order to avoid overheating of the crystallite, we artificially removed the entire kinetic energy of the atoms every 70 MD steps (i.e., every period of time $`t_o=2\times 10^{-14}`$ s); in effect, we removed the heat from the crystallite. A rough estimate of the system temperature, made on the basis of the maximum kinetic energy attained during each period $`t_o`$, gives $`T\sim 1000`$ K at the very beginning of the simulations and $`T\sim 10`$ K at the final stage (after the formation of a single interstitial cluster). We made use of the empirical interatomic potentials proposed by Taji et al. to account for the interaction between the carbon atoms located in graphite layers as well as between the interstitial carbon atoms and the atoms of graphite layers. Parameters of those potentials were chosen by fitting the calculated values of interatomic distances and elastic moduli to their experimental values in defect-free graphite. There are two parts in the Taji potential. The first part describes the interaction between the atoms within a graphite layer and takes the non-central force interaction into account in the form of the Keating strain energy potential.
This part has an explicit minimum for angles between C-C bonds of 120° and hence treats $`sp^2`$ bonding. The second part describes the interaction between graphite layers by the central force potential. This part includes the attractive van der Waals force and the repulsive force that prevents graphite layers from merging together. The second part of the potential was applied by Taji et al. to the interactions not only between the carbon atoms located on different layers but also between the interstitial carbon atom and other lattice atoms. For an interstitial located between graphite layers, its interaction with the graphite layers, as described by the Taji potential, appears to be repulsive, i.e. the interstitial atom does not form covalent bonds with carbon atoms of the nearby graphite layers. Such a choice of the potential agrees with the experimentally observed expansion of graphite due to interstitials, which gives evidence for the repulsion between interstitials and graphite layers, i.e. for the absence of covalent bonds between the interstitial atom and the graphite layers. In other words, the interstitial self-energy $`E_{is}`$ is positive. Its value $`E_{is}=`$ 1.8 eV calculated using the Taji potential agrees well with experimental data and theoretical estimates. Note that the Taji potential was originally proposed for the case that the interatomic distances and bond angles are not far from their equilibrium values in graphite. This condition is, however, fulfilled in the simulations of the formation of both a single interstitial and interstitial clusters, since the distortion of the surface layer by interstitials is rather smooth, see below. Of course, the Taji potential is empirical and its use should also be justified by more sophisticated calculations. Since first-principles calculations of interstitial characteristics in graphite are not known to us, we have carried out TBMD (tight-binding molecular dynamics) simulations of interstitials between the graphite layers. We have demonstrated that there is indeed a repulsive force between an interstitial and the nearby graphite layers, so that new covalent bonds are not formed and $`sp^3`$ bonding does not appear. We have also shown that there is not only qualitative, but also semi-quantitative agreement between the TBMD calculations and the simulations based on the Taji potential. In particular, the calculated values of the volume expansion due to an interstitial, the displacements of atoms in the graphite layers nearest to an interstitial, and the value of $`E_{is}`$ are close to those found using the Taji potential. In order to account for the covalent interaction between the interstitials we employed the TBMD method, which has been proven to give a good description of small carbon clusters. 4. Results and discussion Fig.3 shows the dynamics of 10-interstitial cluster formation in the interlayer region nearest to the graphite surface. Initially ($`t=0`$) the distance between any two interstitials greatly exceeds the characteristic range of the covalent interaction (∼2 Å). At the first stage the motion of interstitials is governed entirely by the attractive deformation interaction. At $`t=100`$, a small 2-interstitial cluster is formed. The atoms of the cluster are tightly bound together by covalent bonding, the covalent component of the binding energy being $`E_b=`$ 2.85 eV/atom, in agreement with the experimental value for the dimer C₂. At $`t=200`$, one more 2-interstitial cluster is formed.
In fact, at $`t=200÷500`$ two processes take place: the formation of new diatomic clusters and the adsorption of single interstitials by those clusters. The main reason for the motion of single interstitials towards the interstitial clusters is the deformation-induced attraction, as in the case of 2-interstitial cluster formation. At $`t=500`$, there are two 3-interstitial clusters and one 4-interstitial cluster in the interlayer region. Those clusters have the linear chain structure governed by the covalent interaction between the interstitials within the clusters. The covalent component of the binding energy per interstitial is $`E_b=`$ 4.73 eV/atom and $`E_b=`$ 4.75 eV/atom for the 3- and 4-interstitial clusters respectively, in accordance with theoretical results of other authors and experimental values (see, e.g., ). At the next stage of the evolution ($`t=500÷5000`$), the three interstitial clusters slowly move towards each other. At $`t=5000`$, two clusters merge into a 6-interstitial cluster. Note that the characteristic times of 3- and 4-interstitial cluster formation are an order of magnitude shorter than the time it takes for those clusters to merge together. In other words, the mobility of an interstitial cluster decreases rapidly as the number of interstitials in the cluster increases. Finally, the 10-interstitial cluster is formed, see Fig.3h. The shape of this cluster is typical for low-dimensional carbon structures, which are characterized by bond angles of 180° (carbyne) and 120° (graphite layers) and by coordination numbers $`Z=2`$ and $`Z=3`$. The covalent component of the binding energy of interstitials within the cluster is $`E_b=`$ 5.50 eV/atom. Note that the most stable geometry of an isolated 10-atom carbon cluster is a ring structure; its binding energy calculated using the same TBMD method equals $`E_b=`$ 5.87 eV/atom. Nevertheless, we have verified that the shape and the binding energy of the 10-interstitial cluster shown in Fig.3h remained nearly unchanged upon its removal from the crystallite and subsequent relaxation. Hence this cluster, taken alone (outside the graphite), is metastable and corresponds to a local energy minimum. The deformation of the surface layer by the 10-interstitial cluster is shown in Fig.4. One can see the dome-like feature on the graphite surface. Its height is about 1.5 Å and its lateral dimensions are about $`10÷12`$ Å. We suggest that some of the hillocks seen by STM on irradiated graphite surfaces may be due to relatively large (composed of ∼10 carbon atoms) interstitial clusters formed in the interlayer region between the surface and penultimate graphite layers owing to the deformation interaction of interstitials. The case considered by us (10 interstitials created simultaneously in a surface area of ∼10 Å²) obviously corresponds to a rather high energy of the incident ions. Note, however, that the interstitials' dynamics shown in Fig.3 may be viewed as the final stage of cluster formation from well-isolated interstitials initially created far apart from each other by low-energy ions, which then migrated to the region confined by the crystallite due to deformation attraction and/or thermally activated diffusion. We have made molecular dynamics simulations of interstitial cluster formation for several other initial distributions of interstitials over the interlayer region nearest to the graphite surface. The results are shown in Figs.5-7. They are similar to those presented above.
One can see that in each case studied all the interstitials, initially located far apart from each other, merge into a single 10-interstitial cluster. The shape of the cluster is not unique; it obviously depends on the initial distribution of interstitials and hence on the specific sequence of small-cluster formation and subsequent fusion into a single large cluster. However, the angles between the C-C bonds in those clusters are equal or close to either 180° or 120°, while the coordination numbers of interstitials within the clusters are $`Z=2`$ or $`Z=3`$ (with the obvious exception of boundary interstitials, for which $`Z=1`$), just as in the cluster shown in Fig.3h. Thus the shape of the interstitial cluster is strongly influenced by the covalent interactions between the interstitials within the cluster. The covalent components of the binding energy of interstitials with each other are $`E_b=`$ 5.34 eV/atom, 5.32 eV/atom, and 5.68 eV/atom for the clusters shown in Figs. 5b, 6b, and 7b respectively. Those values of $`E_b`$ are lower than the binding energy $`E_b=`$ 5.87 eV/atom of a stable 10-atom ring. Nevertheless we have found that the clusters preserve their shapes upon removal from the crystallite, the values of $`E_b`$ changing insignificantly. Thus all the final configurations of interstitials are metastable. Distortions of the surface layer by 10-interstitial clusters are shown in Figs. 5c, 6c, and 7c. It is clearly seen that the shape of the dome-like feature on the graphite surface reflects the particular shape of the interstitial cluster. The height of the hillock varies slightly from one interstitial configuration to another, its typical value being 1.5 Å. The lateral dimensions of the hillock are more sensitive to the shape of the cluster, usually being in the range 10÷15 Å. Finally, we have also simulated the deformation of the graphite surface by a single carbon interstitial and by interstitial clusters composed of various numbers of interstitial carbon atoms. We have found that the calculated height of the surface hillock increases monotonically with the number of interstitials in the cluster, starting from about 0.8 Å for a single interstitial. The same holds true for the characteristic lateral dimensions of the hillock. We believe that such a variation in the size of hillocks may be at least part of the reason for the experimentally observed variety of features on irradiated graphite surfaces. For example, Coratger et al. have found that irradiation of HOPG graphite with 20 keV ¹²C⁺ ions resulted in protrusions presenting an elongated form. These findings can be explained by the formation of relatively small clusters C_N beneath the graphite surface, since for $`N<10`$ a linear chain structure of carbon clusters is believed either to be the most stable geometry or to lie very close in energy to a ring structure. 5. Summary and conclusions Making use of molecular dynamics simulations, we have demonstrated that interstitial carbon atoms formed in the interlayer region between the surface and second graphite layers under ion irradiation attract each other. The physical origin of such an attraction is the strong deformation of graphite layers by interstitials, which makes it energetically favorable for interstitials to come closer to each other (and thus to minimize the overall buckling of graphite layers) than to stay far apart.
Attractive deformation interaction of interstitials results in the formation of interstitial clusters from initially isolated single interstitials. The specific shape of the cluster depends on the initial distribution of interstitials over the interlayer region and is dictated by the strong covalent interaction between the interstitials. Coordination numbers for the majority of interstitials in the cluster are $`Z=2`$ and $`Z=3`$, while all angles between the C-C bonds in the cluster are close to 180° or 120°, these values being typical for low-dimensional carbon structures. Various arrangements of interstitials in the clusters correspond to different metastable states of the cluster at a fixed number of carbon atoms in it. Deformation of the surface layer by interstitial clusters results in dome-like features on the graphite surface. The typical height and lateral dimensions of the hillocks produced by 10-interstitial clusters are 1.5 Å and 10÷15 Å respectively. Such features have been observed repeatedly by STM on graphite surfaces irradiated with noble-gas ions. The results presented in this paper provide an explanation of these experimental observations. In order to make a closer comparison between theory and experiment, it would be interesting to calculate STM images of the hillocks formed above interstitial clusters of different shapes in graphite. Acknowledgments The work was supported by the International Science and Technology Center (Project 467) and by the Russian State Program “Integration”. FIGURE CAPTIONS Fig.1. Distortion of the graphite lattice by two interstitials located between the nearest graphite layers at the in-plane distance $`r=`$ 4.9 Å from each other. Fig.2. Side view (a) and top view (b) of the crystallite composed of 1320 carbon atoms (six layers of 220 atoms each). Closed and open circles show the atoms of odd and even (from top to bottom) layers respectively. Fig.3. Dynamics of 10-interstitial cluster formation in the interlayer region nearest to the graphite surface. Top view. Atoms of graphite layers are not shown. Time $`t`$ is measured in units of $`t_o=2\times 10^{-14}`$ s. $`t=0`$ (a), $`t=100`$ (b), $`t=200`$ (c), $`t=300`$ (d), $`t=500`$ (e), $`t=2000`$ (f), $`t=5000`$ (g), $`t=20000`$ (h). Fig.4. Deformation of the surface layer by the 10-interstitial cluster. Fig.5. Initial (a) and final (b) arrangement of 10 carbon interstitials in the interlayer region between the surface and penultimate layers of graphite. Top view. Atoms of graphite layers are not shown. Deformation of the surface layer by the 10-interstitial cluster (c). Fig.6. Same as in Fig.5 for another initial distribution of the 10 interstitials over the interlayer region. Fig.7. Same as in Fig.5 for another initial distribution of the 10 interstitials over the interlayer region.
no-problem/9909/astro-ph9909474.html
ar5iv
text
# NICMOS images of JVAS/CLASS gravitational lens systems ## 1 Introduction Gravitational lens systems are important for a number of reasons. Individual lenses can be studied to determine a mass and information about the mass distribution of the lensing galaxy or cluster. Moreover, detailed studies of the lensing galaxies give clues to galaxy type and galaxy evolution (Keeton, Kochanek & Falco 1998). If the lensed object is variable, measurement of time delays between variations of the multiple images, together with a good mass model of the lensing object, allows one to estimate the Hubble constant (Kundić et al. 1997; Schechter et al. 1997; Keeton & Kochanek 1997; Biggs et al. 1998). Statistical properties of lenses are also important. Both $`q_o`$ and $`\lambda _o`$ can be constrained by lens statistics, a good example being the recent limit of $`\lambda _o<0.66`$ for flat cosmologies (Kochanek 1996). For all these reasons, studies of larger samples of gravitational lenses are important. The Jodrell Bank VLA Astrometric Survey (JVAS) and the Cosmic Lens All Sky Survey (CLASS) (Myers et al. 1998), in which 10,000 sources have so far been observed with the VLA, together constitute the most systematic radio survey so far undertaken. The aim is to observe all flat-spectrum radio sources in the northern sky with flux densities $`>`$30 mJy at 5 GHz. Flat-spectrum radio sources are useful in this context for two main reasons: they have intrinsically simple radio structures, making the effects of lensing easy to recognise, and they are variable, allowing time delays to be derived. The ease of recognition means that this survey is statistically very clean and should not be subject to any significant selection biases – important in view of the results to be outlined below. Here we present Hubble Space Telescope (HST) infrared imaging of four systems from the JVAS/CLASS lens sample. The main aim of this work is to obtain better constraints on the lensed images, in particular their flux ratios and their positions relative to the lensing galaxy. Saha & Williams (1997) describe the importance of using improved image positions and the ratio of time delays between different pairs of images as rigid constraints in the modelling of lenses, especially when there is a limited number of observational constraints. They note, however, that lately it has become apparent (Witt & Mao 1997) that in the case of quadruple-lens systems even well-determined image positions are not enough for accurate modelling, and thus they use non-parametric models. A second aim of this work is to compare the flux ratios of the components of the lens systems in the different wavebands (optical/near-infrared/radio). Variability of the source spectrum, in connection with the time delay, can lead to different flux ratios in different spectral bands. Another possibility is microlensing of at least one image. Stellar-mass objects can magnify only sufficiently compact sources, so different flux ratios in two spectral bands among two or more images of a multiply-imaged QSO can be a sign of microlensing. For example, in the gravitational lens system 0957+561, one suspects microlensing effects from the different flux ratios in the optical continuum light, the broad-line flux and the compact radio component. A third possibility is differential magnification, where parts of the source are more magnified than others.
This can occur if the scale on which the magnification varies in the source plane is comparable with the size of the emitting regions; the effect can then be different for radiation of different wavelengths. The most spectacular effects of microlensing occur when the angular radius of the source is much smaller than that of the Einstein ring of the microlens (Refsdal & Stabell 1997), but the effect has not been considered much up to now because of the long time scales on which it occurs. However, the time scale is no longer an issue when comparing the flux ratios of the different images in multiply lensed quasars. Mao & Schneider (1998) have also suggested that, as in the case of microlensing for optical fluxes, substructure in the lensing galaxy can distort the radio flux ratios, and this can be another cause of differences in the flux density ratios at different wavebands. Differential reddening is an alternative explanation of differences observed in the flux density ratios of the lensed images at different wavebands. In gravitational lens systems, especially those where the images are seen through the lens, significant reddening by dust in the lens provides a natural explanation that must be considered first. There is considerable evidence for dust reddening during passage through the lensing galaxy (Larkin et al. 1994; Lawrence, Cohen & Oke 1995; Jackson et al. 1998). Infrared pictures are less subject to censorship by dust, generally giving the best view of the lensed images, and enable the degree of extinction to be quantified by comparison of brightnesses between images at different wavelengths. Another advantage of infrared observations is that the contrast between host galaxy starlight and AGN continuum is less than in the optical. Thus, for not very luminous lensed objects, we can expect to see arcs or rings formed by the extended host galaxy starlight. The lens system B1938+666 (King et al. 1998) is a particularly spectacular example of this phenomenon. Our aim is also to study the colours and light distributions of the lensing galaxies themselves. Keeton, Kochanek & Falco (1998) discuss measurements of optical HST data on a number of the lensing galaxies, including B0218+357, B0712+472 and B1600+434, and use optical observations and lens modelling to address the question of dark matter distributions. Jackson et al. (1998) have published preliminary H-band photometry of the JVAS/CLASS sample and show that all the lens systems discovered so far have lensing galaxies of normal mass-to-light ratios and that there is no evidence for dark galaxies in this sample. In section 2 we give details of the HST observations. Section 3 contains brief descriptions and discussion of each individual object, and section 4 contains the conclusions. ## 2 Observations All observations presented here were obtained with the Near Infrared Camera/Multi-Object Spectrometer (NICMOS) on the HST. Observations were taken with Camera 1, which has a pixel scale of 43 mas, through the F160W filter, which has a response approximately that of the standard H-band. A list of observations, with exposure times and dates of observation, is given in Table 1. Fits using the jmfit program in AIPS to the point spread functions generated by the TinyTim program (Krist 1997) yield full widths at half maximum (FWHM) of 131 mas for the NICMOS observations. The complementary Wide Field & Planetary Camera 2 data presented in the Figures have FWHM of 65 mas and 79 mas for the 555-nm and 814-nm images respectively.
The data were processed by the standard NICMOS calibration pipeline at STScI. Multiple exposures were combined by weighted averaging or medianing, depending on whether two or more images were available. Bad pixels and cosmic rays were rejected using a rejection algorithm. The images are first intensity-scaled using image-pixel statistics, here the exposure. The rejection algorithm used is “crreject”, which rejects pixels above the average and works even with two images. The algorithm is appropriate for rejecting cosmic-ray events. As a second check we also added the images with binary image arithmetic within IRAF and divided the result by the number appropriate to the exposures and the number of images involved, so as to preserve the same counts/sec as in the initial NICMOS images. Cosmetically the result of both methods was the same. A point spread function (PSF) was generated by the TinyTim programme (Krist 1997) and deconvolved from the images using the allstar software, available within the NOAO IRAF package. In cases where the point sources lie on top of extended emission, such as B0712+472, the PSF deconvolution was done by hand, shifting and scaling until the smoothest residual map was obtained. Photometry and errors are based on the quantity of PSF subtracted and on the range of PSF subtraction which gives a smooth residual map. In Figs. 1-4 we show the raw images and the images with the point sources subtracted. ## 3 Results and discussion The general characteristics of all four lenses can be found in Table 2. Columns 1 and 2 give the name of the survey in which each lens system was discovered and the name of the lens, while columns 3, 4, 5, 6 and 7 give the number of components in the lens system, the image separation in arcsec (the largest separation in the case of multiple components), the lens redshift, the source redshift and the morphology of the lensing galaxy respectively. ### 3.1 B0218+357 The B0218+357 system consists of two images of a compact radio source at a redshift of 0.96 (Lawrence et al. 1995) separated by 334 mas. In addition to the compact images there is a radio Einstein ring (Patnaik et al. 1993). The lensing galaxy has a redshift of 0.6847 (Browne et al. 1993). It has been detected in the V, I and H-bands with the HST (Fig. 1) and, as far as can be seen given the limits imposed by the signal in the images, it has a smooth brightness distribution and is roughly circular in appearance. The colours of the galaxy are consistent with its previous classification as a spiral galaxy, based on the presence of HI absorption (Carilli, Rupen & Yanny 1993), strong molecular absorption (Wiklind & Combes 1995) and large Faraday rotations for the images. It is worth noting that the presence of a rich interstellar medium means that extinction along lines of sight through the lensing galaxy may well be important, a possibility we will discuss below. There are several anomalies associated with the optical properties of the images in the B0218+357 system which might be sufficient to cast doubt on its classification as a gravitational lens system, were it not for the existence of an Einstein ring (Patnaik et al. 1993), a measured time delay (Corbett et al. 1995; Biggs et al. 1998) and high-resolution VLBA 15 GHz observations of B0218+357 (Patnaik et al. 1995).
The observed anomalies are: * The optical and infrared flux ratios (see Table 3 and Grundahl & Hjorth 1995) for the images are very different from the well-established radio ones (Patnaik et al. 1993). * The separation between the images may be less at optical and infrared wavelengths than it is at radio wavelengths (see below). * The optical/infrared colours of the two images are very different. The V and I flux densities of B are consistent with the ground-based optical spectrum (Browne et al. 1993; Stickel et al. 1996; Lawrence et al. 1995). Similarly, the H flux density is what one might expect if B has a spectrum typical of a BL Lac-like object. The A/B flux density ratios are, however, unexpected. The most obvious result is that they are very different from the 3.7:1 measured in the radio (Patnaik et al. 1993; Biggs et al. 1998). This could arise from micro-lensing of the optical/infrared emission and/or from extinction. However, extinction following a normal Galactic reddening law (Howarth 1983) is not consistent with the optical/infrared colours. This is because, if we attribute everything to extinction, the amount of reddening necessary to give a 1:7 ratio in the V-band image should give about 1:2 in I, assuming an intrinsic 3.7:1 ratio from the radio images. Even if we attribute the gross difference between the radio and optical/infrared ratios to micro-lensing, the fact that the V and I flux densities of image A are the same is not consistent with the A emission being a reddened version of the image B. Furthermore, if the flux densities for image B listed in Table 3 were severely contaminated with lensing galaxy emission, this still would not help. One last possibility would be to invoke some variability in at least one image, although this would not explain the problems arising from comparison between the V and I images, which were taken within an hour of each other. Hjorth (1997) has suggested that the separation between the images may be less at optical and infrared wavelengths (i.e. ∼300 mas) than it is at radio wavelengths (i.e. 334 mas). It is important to establish whether this difference in image separations is real since, if true, it would immediately indicate that we are not seeing two images of the same object in the optical/infrared band. The radio image separation is well established to be 334$`\pm `$1 mas (Patnaik et al. 1993; Patnaik et al. 1995). The separations (and flux densities) given in Table 3 are the result of by-eye fitting of point spread functions to the HST pictures in such a way as to obtain the smoothest residuals. The optical pictures have much lower signal-to-noise than the infrared ones, particularly for the A component, and have correspondingly bigger errors. The image separation derived from fitting to the NICMOS picture is 318$`\pm `$5 mas, a value much smaller than the radio separation given above. This H-band separation is close to the one we find from the optical V and I WFPC2 images (308$`\pm `$10 mas and 311$`\pm `$10 mas in V and I respectively) but slightly larger than the optical separation found by Hjorth (1997) (296$`\pm `$10 mas and 299$`\pm `$10 mas from V and I respectively). The method we employ does not explicitly take account of the light from the lensing galaxy. Models consisting of an exponential disk plus point images have been fitted to the data by McLeod et al. (1999). We have tried the same approach.
A point spread function (PSF) was generated by the TinyTim programme (Krist 1997) and was fitted simultaneously to the A and B components of the lens system. We tried fitting and subtracting the PSF for different positions of the two components. By attributing some of the emission around the B image (about 30%) to the core of the lensing galaxy (central 0.4 $`\times `$ 0.4 arcsec), we can increase the separation until no flux is left after subtracting both components and the lensing galaxy; the maximum best fit obtained in this way gives a separation of 335 mas between the two components. The lensing galaxy is then offset by (0.54, 0.34) pixels in (x, y) from the B component, i.e. it lies 27 mas away at a position angle of 260° from B. By taking into account the light contribution from the lensing galaxy, this method removes the discrepancy between the radio and optical/infrared separations. A limit can be deduced for the fraction of the light from the region of the B image which is contributed by the lensing galaxy. If we examine the optical spectra obtained at a number of different epochs (e.g. Browne et al. 1993; Stickel et al. 1996; Lawrence et al. 1995), they show no significant 4000Å break nor any G-band absorption feature, found in nearly all galaxy types (Bica & Alloin 1987). Of the light in the spectra ∼15% comes from the lensing galaxy. This implies that around the region of the B image, at an observed wavelength of ∼670 nm, the galaxy contributes no more than ∼30% of the total flux density, allowing for slit losses and the fact that the A image makes a small contribution to the total light. Thus, if the lensing galaxy is as blue as or bluer than the BL Lac-like spectrum of the lensed object, the model with a 30% contribution from the galaxy at 1.6 $`\mu `$m is only just consistent with the spectral data. Hence we conclude that, though a priori one would expect the radio and optical/infrared image separations to be the same, there is some evidence to suggest that they are different. The anomalous flux density ratios may also in some way be related to the separation problem. So, as mentioned above, even if one takes into consideration the contribution of the light of the lensing galaxy to the B component, there is still some doubt that the separation problem between the optical/infrared and radio is completely resolved. Hjorth (1997) find from their optical data that the bright images they observed could be identified with the radio A and B components because of the excellent agreement between the position angles, which is also in agreement with Grundahl & Hjorth (1995). They attribute the shorter separation to the extendedness of component A, and they argue that what we see is the part that is not covered by the molecular cloud (Grundahl & Hjorth 1995; Wiklind & Combes 1995). However, a potential problem with this interpretation is that the bright image appears to be a point source and not extended, as would be expected in this case. The alternative explanation that they offer, that A is actually the core of the lensing galaxy while the counterpart to the B image is somehow swamped in the light of the galaxy core, also needs to explain the following: since B is a BL Lac, as shown by both ground-based spectra and its V, I and H fluxes, and is the dominant source of the optical continuum, we would expect to see a radio source at the location of this BL Lac (Urry & Padovani 1995), which we do not.
Grundahl & Hjorth (1995) actually note that it is possible (due to the small intrinsic extent of A - 1 mas at high radio frequencies, so 5 pc at the redshift of the galaxy) that the molecular cloud can cover the entire image A. If this is the case then we are left with the difficult question concerning the nature of the A emission (hereinafter we refer to the optical/infrared emission near A as A) if it is not an image of the AGN. The fact that the true A image is obscured does not mean that all the light from host galaxy of the lensed object must be hidden too; some of its emission could be lensed and give rise to A. A difficulty with this idea is that the A image is compact and of high surface brightness not extended as one might expect if it were an image of some part of the AGN host galaxy. Also there is no sign of an equivalent (B) image near B. An alternative possibility is that A is not a gravitational image at all but is part of the emission from the lensing galaxy (or even a foreground object). In this scenario one has to attribute the proximity of A to A, and the fact that the A lies at the expected position angle for a lensed image, to coincidence. In the end, we find none of the possibilities satisfactory. It could be that we are deluted about the reality of separation difference and there is nothing to explain but the different colours of the A and B images. ### 3.2 B0712+472 Radio observations revealed this system to be a four-image gravitational lens, although only the A, B and C images were visible in the optical WFPC2 images (Jackson et al. 1998). In the new NICMOS image (Fig. 2) we clearly see the D image also. We use the new data to revisit the question of the anomalous flux density ratios discussed in the earlier work. In the radio, optical and infrared bands, images A and C have the same flux density ratio (about 2.5:1) within the errors. The major discrepancy concerns the flux densities of B and D. The invisibility of D in V band and its marginal visibility in I can be ascribed to reddening in the lensing galaxy. Reddening was also initially thought to be the case for the large difference observed in the radio and optical flux density of component B (it has 80-90% of the flux of component A in the radio and only 30% of A in 555-nm). However, the fact that the B/A ratio remains constant throughout the optical and infrared within the errors while the inferred reddening at 555 nm, $`A_V`$1 mag, requires the reddening to fall to $`<0.2`$ mag at 1.6 $`\mu `$m which is clearly not the case here as seen from the NICMOS images, argues against it. Variability is now also unlikely, since the infrared and optical observations were taken over a year apart, much longer than the likely time delay between images, and yet the B/A optical/infrared flux density ratio has remained relatively constant (and very different from the radio flux density ratio). It seems therefore likely that we are, as suggested by Jackson et al. (1998), seeing an episode of microlensing. Further monitoring of this system over periods of years or tens of years (the typical timescale of microlensing in such a system) should reveal a gradual increase in the optical and infrared B/A flux density ratio. The positions of all objects are (just) consistent between the optical, infrared and radio images. The exception is an apparent shift of the position of the lensing galaxy at different wavelengths which is small but systematic with increasing wavelength. 
The lensed object appears to have a spectral index of $`\alpha \simeq 0`$, where the flux $`F_\lambda \propto \lambda ^{-\alpha }`$, in the rest-frame wavelength range 240 nm – 690 nm. This is much redder than the typical values of $`\alpha \sim 1`$ found for quasars (e.g. Jackson & Browne 1991), implying that little quasar continuum is present. It has already been remarked that the object is severely underluminous (Jackson et al. 1998); it now appears that the continuum is also unusually red for a quasar and is thus likely to have a substantial contribution from starlight, although some AGN component must be present to give the broad lines. This reinforces the conclusion that we may be dealing with an example of a mini-quasar inhabiting a host which is bright compared to the AGN component. Given the presumed magnifications (Jackson et al. 1998), the intrinsic radio luminosity of log($`L_{5\mathrm{GHz}}`$/W Hz⁻¹ sr⁻¹) ≃ 24.4 is only just in the radio-loud category as defined by Miller, Peacock & Mead (1990). The host galaxy of the lensed object is also visible, smeared into an arc in the NICMOS picture (Fig. 2), and is considerably redder than the quasar at its centre; the arc is invisible in the optical pictures. The optical-infrared colours of the lensing galaxy are consistent with the conclusion by Fassnacht & Cohen (1998) that the lens is an early-type galaxy. The 4000Å break, at the galaxy's redshift of 0.41 (Fassnacht & Cohen 1998), occurs between 555 nm and 814 nm. ### 3.3 B1030+074 The discovery of the lens system B1030+074 was reported by Xanthopoulos et al. (1998). The lensed images are of a quasar/BL Lac of redshift 1.535 (Fassnacht & Cohen 1998) and are separated by 1.56 arcsec, with a radio flux density ratio in the range 12 to 19. The lensing galaxy has a redshift of 0.599 and its spectrum is typical of an early-type galaxy (Fassnacht & Cohen 1998). In Fig. 3 we show the WFPC2 V and I pictures (Xanthopoulos et al. 1998), together with our new NICMOS 1.6-$`\mu `$m picture. In all three bands the lensing galaxy is seen near the fainter B image. The peak of the galaxy light lies on the line joining the two images, but there is also a secondary emission feature to the SE of the main part of the galaxy. Xanthopoulos et al. suggest that this secondary peak may be a spiral arm. The H-band data indicate that the colour of this feature is similar to that of the rest of the lensing galaxy, possibly suggesting that it is not a spiral arm but either part of the main galaxy or a companion object. The colours of the lensed images are consistent with our knowledge that the lensed object is a quasar or BL Lac object. Of interest, however, are the optical/infrared flux density ratios of the images, which are considerably higher than any of those measured at radio frequencies. It is tempting to attribute such differences to extinction, but the fact that the H-band ratio is even larger than the V and I ratios argues against this. We suggest that the differences are best explained as arising from microlensing. Substructure in the lensing galaxy can also distort the radio flux ratios (as microlensing does for optical fluxes), in particular for highly magnified images, without appreciably changing the image positions.
As Mao & Schneider (1998) note such substructure can be for example spiral arms in disk galaxies or structure caused by continuous merging and accretion of subclumps, as may possibly be the case for B1030+074, and can have a little effect on time delays and the determination of H<sub>0</sub>. ### 3.4 B1600+434 The lens system B1600+434 was discovered in the first phase of the CLASS survey (Jackson et al. 1995). It is a two-image system of separation 1392 mas; the lensing galaxy lies close to the southeastern component. In Fig. 4 we show the HST 555-nm and 814-nm images, together with the new NICMOS 1.6-$`\mu `$m image. The lensing galaxy is an edge-on spiral (Jaunsen & Hjorth 1997; Kochanek et al. 1999), which has been modelled in detail by Maller, Flores & Primack (1997) and by Koopmans et al. (1998). Photometry of this object is affected by the coincidence of the southeastern image, B, with the lensing galaxy. It is also likely that both optical images are variable (Jaunsen & Hjorth 1997). Jaunsen & Hjorth (1997) present ground-based BVRI photometry from which they deduce substantial reddening to be present in image B, almost certainly due to passage through the lensing galaxy. The infrared data lend support to this view, as the flux density ratios of the images in the infrared and radio are indistinguishable to within the errors. The unreddened image A has a spectral index $`\alpha 1.7`$, which is roughly the spectral index of a normal quasar. ## 4 Summary and conclusions Our NICMOS observations have proved successful at detecting both lensing galaxies and lensed objects. We find that often the flux density ratios of the images are different from those measured at radio wavelengths. Having infrared colours enables us to distinguish between extinction and microlensing as explanations for the different flux density ratios. We find evidence for both in different systems. Reddening is clearly affecting the B image in B1600+434 and may be part of the explanation for the puzzling system B0218+357. On the other hand, in both B0712+472 and B1030+074, the fact that the infrared and optical image flux density ratios are the same but different from the radio ones is strong evidence that microlensing is important in these objects. On a larger scale microlensing can have an important effect on lensing statistics (see Bartelmann & Schneider 1990) and the fact the we see evidence in maybe three of the four systems examined here argues that the phenomenon is not as rare as previously thought. If microlensing is considered, single highly-magnified images can occur and for example the tight correlation between total magnification and flux ratio is weakened by microlensing. The arcs seen in B0712+472 illustrate another important fact. Many of the lensed objects have relatively subluminous AGN and in some cases the infrared emission from the host galaxy starlight dominates over that from the AGN. This is when arcs are seen. Other examples are B1938+666 (King et al. 1998), B2045+265 (Fassnacht et al. 1999) and B1933+503 (Marlow et al. 1999). With an arc the lensing galaxy potential is probed at many points and provides useful additional constraints for the lens mass models. ## Acknowledgments This research was based on observations with the Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by Associated Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. 
This research was supported by the European Commission, TMR Programme, Research Network Contract ERBFMRXCT96-0034 “CERES”. We thank C. Kochanek for comments leading to an improvement in the first version of the paper.
no-problem/9909/astro-ph9909112.html
ar5iv
text
# Detection of neutralino annihilation photons from external galaxies ## I Introduction One of the favorite candidates for the astronomical missing mass is a neutral weakly interacting particle. Such a species is predicted in particular by supersymmetry, a theory that is actively tested at accelerators. It is conceivable therefore that most of the dark matter in the halo of the Milky Way is made of such particles. The mutual annihilation of these so–called neutralinos would yield, among a few other indirect signatures, a flux of high–energy gamma–rays. The latter has been extensively studied in the literature. It is unfortunately spoiled by the diffuse background produced by the spallation of cosmic–ray protons on the interstellar gas, as discussed by Chardonnet et al. The distribution of molecular hydrogen inside the galactic ridge is not sufficiently well known to ensure an accurate prediction of the diffuse emission. This may preclude a reliable interpretation of any putative gamma–ray excess in terms of supersymmetric dark matter. We examine here the possibility of observing the gamma–ray signal from extra–galactic systems that contain large amounts of unseen matter. The giant elliptical galaxy M87, at the center of the Virgo Cluster, provides an excellent illustration. It is known to contain ten times more mass than our own Milky Way. Actually, X-ray observations of its ambient hot gas indicate that in the innermost 100 kpc, the mass reaches 10¹³ M⊙. If neutralinos pervade this galaxy, a strong gamma–ray emission should be produced in the central region. Neutralino annihilation gives rise to the gamma–ray flux at the Earth $$\mathrm{\Phi }_\gamma ^{\mathrm{susy}}=\frac{1}{4\pi }\sigma vN_\gamma \int _{\mathrm{los}}n_\chi ^2𝑑s$$ (1) which, in the case of M87, is two orders of magnitude larger than for our own galaxy. On the other hand, M87 is quite distant, about 15 Mpc away. It should therefore appear as a bright gamma–ray spot on the sky, extending over at most a few hundred square arcminutes. The detection of such a strong but localized source is well suited to atmospheric Čerenkov telescopes (ACT), which can only monitor small portions of the sky at a time but have very large effective collecting areas. Dwarf spheroidal (dSph) galaxies are also suspected to contain significant amounts of dark matter. They orbit round the Milky Way and are intermediate in size between the globular clusters and larger systems such as the Magellanic clouds. The typical mass of dSph's is 1–4×10⁷ M⊙ while their radii reach up to 1–2 kpc. They are also closer than M87. The gamma–ray flux (1) produced by annihilating species in M87 may be split into an astrophysical piece and a particle physics piece. The former term merely amounts to the integral along the line of sight of the dark matter density squared. It is approximately given by $`M^2R^{-5}`$, where $`M`$ denotes the mass contained within the radius $`R`$. In the case of M87, that line-of-sight integral is 10 M⊙² pc⁻⁵, whereas it is 0.01–1 M⊙² pc⁻⁵ for dwarf spheroidals. This compares to the Milky Way value of 0.1 M⊙² pc⁻⁵, which corresponds to the inner 100 kpc. These rough estimates, which are checked in the sketch below, are improved in Sec. II, where the distribution of matter inside M87 and the dSph systems is modeled together with the radial profile of the neutralino-induced gamma–ray emission.
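As a minimal sketch of these order-of-magnitude estimates, the following snippet evaluates $`M^2R^{-5}`$ for the representative masses and radii quoted above; the Milky Way mass of ∼10¹² M⊙ within 100 kpc is an assumption here, though it is consistent with the galactic mass model used in Sec. II, and the dSph value varies with the adopted mass and radius:

```python
def los_integral_estimate(mass_msun, radius_pc):
    """Rough line-of-sight integral of the density squared,
    approximated as M^2 / R^5, in Msun^2 pc^-5."""
    return mass_msun ** 2 / radius_pc ** 5

print(f"M87 (1e13 Msun within 100 kpc): {los_integral_estimate(1e13, 1e5):.0f}")   # ~10
print(f"Milky Way (~1e12 Msun, 100 kpc): {los_integral_estimate(1e12, 1e5):.1f}")  # ~0.1
print(f"dSph (2e7 Msun, 1 kpc):          {los_integral_estimate(2e7, 1e3):.2f}")   # a few tenths
```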
The various backgrounds to this signal are discussed in Sec. III, where a fiducial example is presented. In particular, the signal–to–noise ratio is derived as a function of the angular distance to the centers of the sources under scrutiny. The supersymmetric model is presented in Sec. IV. We then delineate in Sec. V the domain of neutralino masses and gamma–ray production cross sections which the next generation of atmospheric Čerenkov telescopes will explore. Both continuum and monochromatic channels are featured. We finally draw some conclusions in Sec. VI, where we pay special attention to a possible clumpy structure of the neutralino distribution. ## II The M87 and dSph Systems The observed X-ray emission produced by clusters of galaxies such as Virgo results from the thermal bremsstrahlung that takes place in the hot diffuse intracluster gas (see for a complete review). The temperature profile $`T(r)`$ of the gas can be inferred from the spectrum of the X–ray radiation. The electron number density profile $`n_e(r)`$ can be obtained from the X–ray surface brightness $$\mathrm{\Sigma }(r)=\int n_e(\sqrt{r^2+s^2})L_X𝑑s,$$ (2) where $`L_X`$ denotes the X–ray luminosity of the plasma. The properties of the hot gas in the vicinity of M87 have been determined by Tsai. The X–ray data are well fitted by $$n_e(r)=n_o\frac{(r/a_1)^{-\alpha _1}}{1+(r/a_1)^{\alpha _3}}\text{ and }T(r)=T_{\mathrm{\infty }}\left(\frac{r}{a_2+r}\right)^{\alpha _2},$$ (3) with $`n_0=4.31\times 10^{-2}\text{cm}^{-3}`$, $`a_1=6.63`$ kpc, $`a_2=4.58\times 10^5`$ kpc, $`\alpha _1=0.49`$, $`\alpha _2=0.114`$, $`\alpha _3=0.869`$ and $`T_{\mathrm{\infty }}=8.35\times 10^7`$ K. Assuming that the intracluster gas is in hydrostatic equilibrium, the total mass profile is given by $$M(r)=-\frac{kTr}{G\mu m_p}\left(\frac{d\mathrm{log}n_e}{d\mathrm{log}r}+\frac{d\mathrm{log}T}{d\mathrm{log}r}\right),$$ (4) where $`\mu m_p`$ is the average mass of a particle in the gas. The total mass inside M87 reaches 8.4×10¹¹ M⊙ at 10 kpc and 1.4×10¹³ M⊙ at 100 kpc (a numerical check of these values is sketched below). The contribution from the stars alone was obtained from B-band surface brightness profiles (see and references therein). The dark matter component distribution is derived once the stars and the gas have been removed. Mappings of the X–ray emission in the Virgo cluster are consistent with the hypothesis that this system and its central galaxy contain non–baryonic dark matter. The ratio of the dark matter density to the baryonic matter density is also borrowed from : $$\frac{\rho _{\mathrm{DM}}}{\rho _\mathrm{B}}=\mathrm{\hspace{0.33em}0.15}\left(\frac{r}{1\mathrm{kpc}}\right)^{1.5}.$$ (5) The integral along the line of sight of $`\rho _{\mathrm{DM}}^2`$ is readily obtained. Dwarf spheroidal (dSph) galaxies also seem to contain large amounts of unseen material. Like globular clusters, dSph's have low luminosities, of order 10⁵–10⁷ L⊙, within an ill-defined radius $`r_t`$ of a few kpc. Their structure also follows a King profile (see for a review). If these systems are in virial equilibrium, a velocity dispersion $`\sigma `$ of a few km/s translates into a mass $$M_{\mathrm{dSph}}\simeq 2.3\times 10^7\mathrm{M}_{\mathrm{\odot }}\left(\frac{\sigma }{10\mathrm{km}/\mathrm{s}}\right)^2\left(\frac{1\mathrm{kpc}}{r_t}\right),$$ (6) hence these systems have mass–to–light ratios that may reach ∼200. We are interested here in four galaxies with exceptionally high M/L–values and galactocentric distances less than 100 kpc.
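The hydrostatic mass estimate of Eq. (4) is straightforward to evaluate with the analytic profiles (3). The sketch below does so in Python; the mean molecular weight is an assumption of this sketch (a fully ionized hydrogen plasma, $`\mu \approx 0.5`$), chosen because it reproduces the masses quoted above:

```python
import math

# CGS constants
K_B = 1.381e-16      # Boltzmann constant [erg/K]
G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
M_P = 1.673e-24      # proton mass [g]
KPC = 3.086e21       # kiloparsec in cm
MSUN = 1.989e33      # solar mass in g

# Fit parameters of Eq. (3), as quoted in the text
N0, A1, A2 = 4.31e-2, 6.63, 4.58e5          # cm^-3, kpc, kpc
ALPHA1, ALPHA2, ALPHA3 = 0.49, 0.114, 0.869
T_INF = 8.35e7                               # K
MU = 0.5    # mean molecular weight (assumed: ionized hydrogen)

def temperature(r_kpc):
    return T_INF * (r_kpc / (A2 + r_kpc)) ** ALPHA2

def hydrostatic_mass(r_kpc):
    """Hydrostatic mass of Eq. (4), in Msun, using the analytic
    logarithmic derivatives of the profiles (3)."""
    x = (r_kpc / A1) ** ALPHA3
    dln_ne = -ALPHA1 - ALPHA3 * x / (1.0 + x)   # d log n_e / d log r
    dln_t = ALPHA2 * A2 / (A2 + r_kpc)          # d log T / d log r
    kT = K_B * temperature(r_kpc)
    return -kT * r_kpc * KPC / (G * MU * M_P) * (dln_ne + dln_t) / MSUN

for r in (10.0, 100.0):
    print(f"M(<{r:.0f} kpc) = {hydrostatic_mass(r):.2e} Msun")
# -> ~8.4e11 Msun at 10 kpc and ~1.4e13 Msun at 100 kpc,
#    in line with the values quoted in the text.
```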
The dSph's featured in table I are Carina, Draco, Ursa Minor and Sextans. We have modeled their mass distributions with a one–component King profile for which the phase space density is given by $$f(E)\propto \mathrm{exp}\left(\frac{\varphi _t-E}{\sigma ^2}\right)-1.$$ (7) The stellar or particle energy per unit mass is denoted by $`E=\varphi +v^2/2`$ while $`\varphi _t=\varphi (r_t)`$ is the potential at the tidal radius $`r_t`$ of the dSph. Beyond that boundary, stars have enough energy to escape from the gravitational field of the system and are captured by the tidal field of the nearby Milky Way. The mass distribution ensues from the phase space density (7) $$\rho (r)\propto \frac{\sqrt{\pi }}{4}\mathrm{exp}(u)\mathrm{erf}(\sqrt{u})-\frac{\sqrt{u}}{2}-\frac{u^{3/2}}{3},$$ (8) where $`u`$ denotes the ratio $`\left\{\varphi _t-\varphi (r)\right\}/\sigma ^2`$ while erf is the error function. One–component King models are completely determined once the velocity dispersion $`\sigma `$, the central density $`\rho _c`$ and the ratio $`\chi =\left\{\varphi _t-\varphi (0)\right\}/\sigma ^2`$ are specified. Any combination of these quantities suffices. The concentrations of the four dSph's under scrutiny are given in table I as determined by . They are related to the tidal and core radii through $$c=\mathrm{log}_{10}\left(\frac{r_t}{r_c}\right),$$ (9) where the core radius $`r_c`$ is defined as $$r_c=\frac{3\sigma }{\sqrt{4\pi G\rho _c}}.$$ (10) Values of the concentration of 0.5 and 1 respectively translate into the ratios $`\chi =1.97`$ and 4.85. Fixing the concentration allows for the determination of the density profile $`\rho (r)/\rho _c`$ as a function of the reduced radius $`r/r_c`$. The second input to our calculations is the dSph mass as given by the central values of table I. We finally set the average dSph density $`\overline{\rho }_{\mathrm{dSph}}`$ by requiring that the proper gravitational field of these galaxies is compensated, at their boundaries, by the tidal field of the Milky Way taken at perigalacticon. Assuming a logarithmic galactic potential implies that $$\overline{\rho }_{\mathrm{dSph}}=f(e)\overline{\rho }_\mathrm{G},$$ (11) where the function $`f(e)`$ depends on the orbital eccentricity of the satellite galaxy $$f(e)=\frac{1+\left[\left(1+e\right)^2/2e\right]\mathrm{ln}\left[\left(1+e\right)/\left(1-e\right)\right]}{\left(1-e\right)^2}.$$ (12) The average galactic density $`\overline{\rho }_\mathrm{G}`$ is understood within the sphere centered on the Milky Way, with radius equal to the semimajor axis $`a`$ of the dSph orbit. The corresponding mass is given by $$M_\mathrm{G}=1.1\times 10^{10}\mathrm{M}_{}\times a\left[\text{kpc}\right].$$ (13) We set the semimajor axis $`a`$ equal to the dSph galactocentric distance $`d`$. Eccentricities in the range between 0 and 1/2 are generally assumed . Here, their values were derived by requiring that the resulting velocity dispersions $`\sigma _{\mathrm{th}}`$ should be close to the observations – see table I. In the case of Carina and Draco, the eccentricity is $`e=0`$ while for Ursa Minor, a value of $`e=1/2`$ is appropriate. As regards Sextans, we simply imposed the equality between the dSph and galactic average densities, i.e., $`\overline{\rho }_{\mathrm{dSph}}=\overline{\rho }_\mathrm{G}`$, in order to get a velocity dispersion of 8.8 km/s, not too far from the measured value of $`6.2\pm 0.8`$ km/s.
Assuming relation (11) and a circular orbit would have led to $`\sigma _{\mathrm{th}}=9.9`$ km/s. In modeling the dSph galaxies, we have adopted the point of view that these satellites contain large amounts of dark matter as implied by their high velocity dispersions. In the alternative explanation, the latter are the result of an ongoing tidal stripping. However, Oh et al. showed that unbound but not yet dispersed systems have velocity dispersions fairly close to the values derived from virial equilibrium. The large M/L–values of the four dSph systems under consideration therefore do not depend on whether these galaxies are in equilibrium or are currently being torn apart. Notice finally that most of the putative neutralino–induced gamma–ray signal emitted by these dSph's is produced in the inner 10 arcmin as shown in the right panel of Fig. 1. It weakly depends on the actual boundaries of these systems.

## III The Gamma-ray Signal and its Backgrounds The gamma–ray flux produced by a halo of dark matter particles may be computed as follows. First, for each set of supersymmetric parameters, we derived the thermally averaged annihilation cross section $`\sigma v_A`$ into some channel $`A`$. Then, we used the Lund Monte Carlo to compute the photon spectrum per annihilation from hadronization or decay processes, and we integrated over some specified threshold which depends on the atmospheric Čerenkov telescopes in question. Finally, we summed over all channels to find the total photon spectrum from a given supersymmetric model. The volume production rate of gamma–rays is simply given by $`n_\chi ^2\sigma vN_\gamma `$, where $`n_\chi `$ is the number density of annihilating particles and $`N_\gamma `$ is the mean number of photons above the threshold. The corresponding flux at the Earth depends on the integral, along the line of sight, of the dark matter density squared $$\mathrm{\Phi }_\gamma ^{\mathrm{susy}}=\frac{1}{4\pi }\frac{\sigma vN_\gamma }{m_\chi ^2}\int _{\mathrm{los}}\rho _\chi ^2𝑑s,$$ (14) as well as on the gamma–ray production cross section $`\sigma vN_\gamma `$ and the mass $`m_\chi `$ of the species. Atmospheric Čerenkov telescopes (ACT) offer the most promising method to detect gamma–rays with energies above 100 GeV. When passing through the atmosphere, incoming high–energy photons emit a Čerenkov radiation that is detected and analyzed to infer the energy and arrival direction of the primary gamma–ray. The performance of ACTs is determined by the surface of the optical reflectors used to collect the Čerenkov light, and the ability to reject the cosmic–ray induced background. A thorough analysis of these performances can be found in Aharonian . We will simply model them by the effective collecting area $`A_{\mathrm{eff}}`$, a feature of the instrument that takes into account in particular the actual surface of detection together with the energy dependent detection efficiency. The surface brightness $`\mu _\gamma ^{\mathrm{susy}}`$ of the source as seen by an ACT depends on the collecting area of the instrument and on the integration time $`T`$ $$\mu _\gamma ^{\mathrm{susy}}=\mathrm{\Phi }_\gamma ^{\mathrm{susy}}A_{\mathrm{eff}}T.$$ (15) We plot the surface brightness in Fig. 1 for a collecting area $`A_{\mathrm{eff}}`$ = 1 km<sup>2</sup> and for an entire year of effective observation.
In the fiducial model taken here, the neutralino mass is $`m_\chi =1`$ TeV while the annihilation cross section has been set equal to $`\sigma vN_\gamma =10^{-25}`$ cm<sup>3</sup> s<sup>-1</sup>. The surface brightness corresponds to the number of photons above a threshold of 100 GeV that are produced per square arcmin of the source. In the left panel, the giant galaxy M87 is featured whereas in the right panel, the four dSph satellites discussed in the previous section, i.e., Carina, Draco, Sextans and Ursa Minor, are presented. The signal declines away from the centers of these systems. It follows the integral along the line of sight of the neutralino density squared – remember that $`n_\chi =\rho _\chi /m_\chi `$. Actually, we must take into account the finite angular resolution of the instrument. When the telescope is pointed towards the center of the object, photons coming from directions within the resolution cone also reach the detector. Thus, the relevant quantity to examine is the integral of the surface brightness $`\mu _\gamma ^{\mathrm{susy}}`$ over a disk centered on the source, with angular radius $`\theta `$ $$N_\mathrm{s}=\int _0^\theta 2\pi \alpha \mu _\gamma ^{\mathrm{susy}}𝑑\alpha .$$ (16) When the angular resolution of the ACT is good enough, the size of that circular region is generally specified by the requirement that the signal–to–noise ratio should be optimal. The number of neutralino–induced source photons $`N_\mathrm{s}`$ should actually be compared to the background $`N_{\text{bg}}`$, the origins of which are various. Were it constant, this background could easily be removed from the signal. Actually, it follows Poisson statistics, and as such, it exhibits fluctuations of amplitude $`\sqrt{N_{\text{bg}}}`$. These fluctuations have a smaller chance of being interpreted as a signal when the significance $$S=\frac{N_\mathrm{s}}{\sqrt{N_{\text{bg}}}}$$ (17) is large. There are two main types of background events. First, an experimental background is due to hadronic and electronic cosmic–rays that impinge on the top of the atmosphere. The induced showers can be misinterpreted as gamma–ray events. Electrons make up the largest source of background insofar as their showers are of the electromagnetic type and cannot be disentangled from those generated by the impact of high–energy photons. The corresponding flux steeply decreases at high energy $$\mathrm{\Phi }_\mathrm{e}=\left(6.4\times 10^{-2}\mathrm{GeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}\right)\left(\frac{E}{1\mathrm{GeV}}\right)^{-3.3\pm 0.2},$$ (18) and leads to the background $$\frac{dN_\text{e}}{d\mathrm{\Omega }}(E>E_0)=\left(1.9\times 10^4\mathrm{km}^{-2}\mathrm{yr}^{-1}\mathrm{arcmin}^{-2}\right)\left(\frac{E_0}{\text{100 GeV}}\right)^{-2.3}.$$ (19) In the left panel of Fig. 1, that electron induced background is featured by the dotted line (a). It is more than an order of magnitude larger than the neutralino–induced signal at maximum. It is also noticeably flat all over the source. Observations performed between 50 GeV and 2 TeV yield a hadron flux $$\mathrm{\Phi }_{\mathrm{had}}=\left(1.8\mathrm{GeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}\right)\left(\frac{E}{1\mathrm{GeV}}\right)^{-2.75}.$$ (20) Hadron–induced showers are more extended on the ground than those of the electromagnetic type. Stereoscopy is a powerful tool to discriminate hadrons from electrons and gamma–rays.
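As a cross-check of these numbers (a hypothetical numerical aside of ours, not in the original paper), integrating the power laws (18) and (20) above threshold and converting units reproduces the integrated electron background (19) and, once the hadron rejection discussed next is folded in, the hadron background of Eq. (21) below:

```python
import math

CM2_PER_KM2    = 1e10
S_PER_YR       = 3.156e7
ARCMIN2_PER_SR = (180 * 60 / math.pi) ** 2   # ~1.18e7 arcmin^2 per sr
CONV = CM2_PER_KM2 * S_PER_YR / ARCMIN2_PER_SR

def integrated(norm, index, E0):
    """Integral of norm*(E/GeV)^-index above E0 [GeV], converted from
    cm^-2 s^-1 sr^-1 to km^-2 yr^-1 arcmin^-2."""
    return CONV * norm / (index - 1) * E0 ** (1 - index)

print(integrated(6.4e-2, 3.3, 100))          # ~1.9e4, cf. Eq. (19)
print(integrated(1.8, 2.75, 100) * 1e-3)     # ~8.7e3 with the 1/1000
                                             # hadron rejection, cf. Eq. (21)
```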
The CAT experiment, for instance, has already achieved a rejection factor of one misidentified event over a sample of 600 showers generated by cosmic–ray hadrons . Here, we have assumed an even better rejection factor, with only one misidentified hadron out of a thousand showers. This yields a hadron background $$\frac{dN_{\text{had}}}{d\mathrm{\Omega }}(E>E_0)=\left(8.7\times 10^3\mathrm{km}^{-2}\mathrm{yr}^{-1}\mathrm{arcmin}^{-2}\right)\left(\frac{E_0}{\text{100 GeV}}\right)^{-1.75},$$ (21) which corresponds to the dotted line (b) of Fig. 1. Once again, that background is flat over M87. Next come the astrophysical sources of background. To begin with, Sreekumar et al. have measured an extragalactic component in the gamma–ray diffuse emission with the EGRET instrument on board the Compton gamma–ray observatory $$\mathrm{\Phi }_{\mathrm{eg}}=\left(7.32\pm 0.34\times 10^{-9}\mathrm{MeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}\right)\left(\frac{E}{451\mathrm{MeV}}\right)^{-2.10\pm 0.03}.$$ (22) This translates into the background (c) $$\frac{dN_{\text{eg}}}{d\mathrm{\Omega }}(E>E_0)=\left(2.1\times 10^2\mathrm{km}^{-2}\mathrm{yr}^{-1}\mathrm{arcmin}^{-2}\right)\left(\frac{E_0}{\text{100 GeV}}\right)^{-1.1}.$$ (23) In our fiducial example, the neutralino–induced signal exceeds the extragalactic background within ∼ 1 arcmin from the center of M87. Then, we have modeled the gamma–ray diffuse emission from that giant elliptical galaxy itself. Local cosmic–rays interact with the cluster gas to produce high–energy photons that may potentially contaminate the signal. In the inner 10 kpc, the magnetic field of M87 is comparable to that of the Milky Way, with a magnitude ∼ 1 $`\mu `$G. It falls by a factor of ten outwards in the Virgo cluster, at a distance of ∼ 100 kpc. Assuming that equipartition of energy holds between this magnetic field and the cosmic–rays that pervade M87 – just like in our own galaxy – we infer a gamma–ray emissivity of $$I_\mathrm{H}(E)=\left(2\times 10^{-35}\mathrm{GeV}^{-1}\mathrm{s}^{-1}\mathrm{sr}^{-1}\right)\left(\frac{E}{1\mathrm{TeV}}\right)^{-2.73},$$ (24) per hydrogen atom illuminated by high–energy protons. Once multiplied by the hydrogen column density across M87, it yields the in situ diffuse gamma–ray flux. We have assumed that hydrogen is the main constituent of the gas at the center of the Virgo cluster so that we have integrated the electron density, as derived in Sec. II, along the appropriate lines of sight. The gamma–ray emissivity as given by relation (24) is typical of the inner 10 kpc inside M87. It has been rescaled by the factor $`\left\{1+\left(r/10\mathrm{kpc}\right)^2\right\}^{-1}`$ to account for the outward decrease of both the magnetic energy and cosmic–ray flux. The M87 gamma–ray diffuse background is presented as the dashed line (d) of Fig. 1. It falls outwards as a result of the combined decrease of the hydrogen column density and of the cosmic–ray flux. Finally, we have taken into account the Milky Way diffuse emission. We use hydrogen column densities inferred from the dust maps of . A hydrogen column density of $`1.69\times 10^{20}`$ H cm<sup>-2</sup> in the direction of M87 corresponds to $$\frac{dN_{\text{MW}}}{d\mathrm{\Omega }}(E>E_0)=\left(2.8\mathrm{km}^{-2}\mathrm{yr}^{-1}\mathrm{arcmin}^{-2}\right)\left(\frac{E_0}{\text{100 GeV}}\right)^{-1.73},$$ (25) and to the dashed line (e). The Milky Way component is the weakest source of background. Electrons and to a lesser extent hadrons – as long as rejection is efficient – dominate. In the right panel of Fig.
1, the total background is presented. It encompasses the various sources discussed above except a local diffuse emission. The dSph galaxies contain old stars and the acceleration mechanisms at work in the Milky Way are presumably less prevalent there. Even in the case of the brightest source, Ursa Minor, the signal is still three orders of magnitude below the background – see curve (g). Because Carina lies in the sky towards the galactic ridge, the hydrogen column density of the Milky Way is quite large, reaching a value of $`8.7\times 10^{23}`$ H cm<sup>-2</sup>. The gamma–ray diffuse emission of our own galaxy dominates the background (f) in that specific case. The significance $`S`$ is presented in Fig. 2 for M87 (left panel) and in the case of Ursa Minor, the best dSph source (right panel). Should the signal be flat, both $`N_\mathrm{s}`$ and $`N_{\text{bg}}`$ would be proportional to the surface of the sky monitored by the ACT. As a result, the signal–to–noise ratio would just increase like the angular radius $`\theta `$ of the observed region. In Fig. 2, it actually increases as the beam size opens up. Because the signal is not flat but weakens far from the source, the signal–to–noise ratio reaches a maximum and drops at larger radii. The ACT acceptance has been set equal to 0.01 km<sup>2</sup> yr. The neutralino mass is still 1 TeV whereas the gamma–ray production cross section $`\sigma v_{\mathrm{cont}.}N_\gamma =10^{-25}`$ cm<sup>3</sup> s<sup>-1</sup> is the same for the three values of the threshold energy featured here. As the background is flat over the source, the signal–to–noise ratio always peaks at the same angular position. For M87, the significance is the largest for an angular aperture of ∼ 1.4 arcmin. Because Ursa Minor is closer, the optimal beam size becomes 7.7 arcmin. As both the electronic and hadronic backgrounds weaken with the gamma–ray energy, the magnitude of the maximum increases with the threshold as is clear in both panels. For M87 and a 50 GeV threshold, the significance reaches a value of ∼ 0.4 whereas for 250 GeV, the peak value increases up to 2. In the case of Ursa Minor, the variations of the significance are just the same. Should the acceptance be 1 km<sup>2</sup> yr, these signal–to–noise ratios would be rescaled up by an order of magnitude.

## IV Exploring the SUSY Parameter Space We have explored the Minimal Supersymmetric Standard Model (MSSM). This framework has many free parameters, but with reasonable assumptions the set of parameters is reduced to seven: the Higgsino mass parameter $`\mu `$, the gaugino mass parameter $`M_2`$, the ratio of the Higgs vacuum expectation values $`\mathrm{tan}\beta `$, the mass of the $`CP`$–odd Higgs boson $`m_A`$, the scalar mass parameter $`m_0`$ and the trilinear soft SUSY–breaking parameters $`A_b`$ and $`A_t`$ for third generation squarks. The only constraint from supergravity that we imposed is gaugino mass unification, though the relaxation of this constraint would not significantly alter our results. For a more detailed description of these models, see Refs. . The lightest stable supersymmetric particle in most models is the lightest of the neutralinos, which are superpositions of the superpartners of the gauge and Higgs bosons. We used the one–loop corrections for the neutralino and chargino masses given in Ref. . For the Higgs bosons we used the leading log two–loop radiative corrections, calculated within the effective potential approach given in Ref. .
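Schematically, the random scan described in the next paragraphs can be organized as in the sketch below. This is our illustration only: the helper functions `passes_accelerator_cuts` and `relic_density` are placeholders for the actual spectrum, constraint and relic–density machinery, and the parameter ranges shown are *not* those of Table II.

```python
import random

# Placeholder scan ranges -- Table II of the paper is not reproduced here.
RANGES = {"mu": (-5e4, 5e4), "M2": (-5e4, 5e4), "tan_beta": (1.0, 60.0),
          "m_A": (50.0, 1e4), "m_0": (100.0, 3e4),
          "A_b": (-3.0, 3.0), "A_t": (-3.0, 3.0)}

def passes_accelerator_cuts(model):
    # Stand-in for the chargino bound (26), the Higgs mass bound
    # and the b -> s gamma constraint.
    return True

def relic_density(model):
    # Stand-in for the full relic-density calculation, including
    # coannihilations; returns Omega_chi h^2.
    return 0.1

accepted = []
for _ in range(100_000):
    model = {k: random.uniform(*v) for k, v in RANGES.items()}
    if not passes_accelerator_cuts(model):
        continue
    omega_h2 = relic_density(model)
    if omega_h2 > 0.5:                    # the upper cut used in the text
        continue
    # subdominant neutralinos: rescale the relevant halo densities
    rescale = min(1.0, omega_h2 / 0.025)
    accepted.append((model, rescale))
```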
We made thorough random scans of the model parameter space, with overall limits of the seven MSSM parameters as given in Table II. Each model was examined to see whether it is excluded by the most recent accelerator constraints. The most important of these are the LEP bounds on the lightest chargino mass $$m_{\chi _1^+}>\{\begin{array}{ccc}91\mathrm{GeV}\hfill & ,& |m_{\chi _1^+}-m_{\chi _1^0}|>4\mathrm{GeV}\hfill \\ 85\mathrm{GeV}\hfill & ,& \mathrm{otherwise}\hfill \end{array}$$ (26) and on the lightest Higgs boson mass $`m_{H_2^0}`$ (which ranges from 72.2–88.0 GeV depending on $`\mathrm{sin}(\beta -\alpha )`$ with $`\alpha `$ being the Higgs mixing angle) and the constraints from $`b\to s\gamma `$ . For each allowed model, the relic density of neutralinos $`\mathrm{\Omega }_\chi h^2`$ was calculated, where $`\mathrm{\Omega }_\chi `$ is the density in units of the critical density and $`h`$ is the present Hubble constant in units of $`100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. We used the formalism of Ref. for resonant annihilations, threshold effects, and finite widths of unstable particles and we included all two–body tree–level annihilation channels of neutralinos. We also included the so–called coannihilation processes between all neutralinos and charginos in the relic density calculation according to the analysis of Edsjö and Gondolo . Present observations favor $`h=0.6\pm 0.1`$, and a total matter density $`\mathrm{\Omega }_M=0.3\pm 0.1`$, of which baryons may contribute 0.02 to 0.08 . However, we only required that $`\mathrm{\Omega }_\chi h^2\le 0.5`$. We were also interested in models where neutralinos are not the only component of dark matter. In models with $`\mathrm{\Omega }_\chi h^2\le 0.025`$, we rescaled the relevant halo densities by a factor $`\mathrm{\Omega }_\chi h^2/0.025`$ to account for the fact that a supplemental source of dark matter is required in such models.

## V Flux of Gamma–rays from Neutralino Annihilation We now present the results of the scans over the supersymmetric parameter space. We first made scatter plots of the continuum gamma–ray production rate $`\sigma v_{\mathrm{cont}.}N_\gamma `$ versus the neutralino mass $`m_\chi `$, for three gamma–ray thresholds $`E_{\mathrm{th}}`$ = 50, 100 and 250 GeV. The results are presented in Fig. 3, along with the 3–$`\sigma `$ detection limit of the signal from M87, for a typical exposure of $`0.01`$ km<sup>2</sup> yr. This acceptance corresponds to the next generation of ACTs. The HESS project for instance should reach a threshold of 40 GeV for a collecting area of 300 m by 300 m. Its angular resolution should be 0.1°. We have assumed a generous 0.1 yr integration time which would correspond to a few months of continuous observation. This compares to the VERITAS project which should be sensitive to the energy range extending from 50 GeV up to 50 TeV. A collecting area of 10,000 m<sup>2</sup> is expected at 100 GeV, increasing by an order of magnitude for TeV photons. VERITAS should reach an angular resolution of 5 arcmin at 100 GeV . As shown in the left panel of Fig. 2, the optimal beam size is 1.4 arcmin in the direction of M87. For a 6 arcmin angular resolution and a 50 GeV threshold, the significance drops down to $`S=0.2`$. A 3–$`\sigma `$ detection level translates therefore into the gamma–ray production cross section $`\sigma v_{\mathrm{cont}.}N_\gamma =1.5\times 10^{-24}`$ cm<sup>3</sup> s<sup>-1</sup> for a 1 TeV neutralino. The heavy solid line in the upper–left panel of Fig.
3 corresponds to a sensitivity level of $$\sigma v_{\mathrm{cont}.}N_\gamma \gtrsim 1.5\times 10^{-26}\mathrm{cm}^3\mathrm{s}^{-1}\left(\frac{m_\chi }{100\mathrm{GeV}}\right)^2.$$ (27) The region below that line will not be accessible, even with the next generation of Čerenkov telescopes. When the threshold increases to 100 and 250 GeV, the sensitivity respectively reaches down to a level of $`7.2\times 10^{-27}`$ and $`2.8\times 10^{-27}`$ cm<sup>3</sup> s<sup>-1</sup>. Notice that even for a 250 GeV threshold, the number of background photons collected within 6 arcmin from the center of M87 amounts to ∼ 4,600 particles. A 3–$`\sigma `$ signal corresponds to ∼ 200 additional gamma–rays from that hot spot. For a 50 GeV threshold, $`N_{\text{bg}}\simeq 1.4\times 10^5`$ photons whereas $`N_\mathrm{s}\simeq 1,100`$ photons. In Fig. 4, we present the results for the gamma–ray lines. We considered the two processes $`\chi \chi \to \gamma \gamma `$ and $`\chi \chi \to Z^0\gamma `$. The photon energy in the $`2\gamma `$ process is clearly $`E_\gamma =m_\chi `$, while in the $`Z^0\gamma `$ reaction the photon energy is $$E_\gamma =m_\chi -\frac{m_Z^2}{4m_\chi }.$$ (28) As in Fig. 3, we also present detection limits. The left panel features the $`2\gamma `$ line whereas the right panel presents the $`Z^0\gamma `$ process. The sensitivity limits are identical for both lines at high energy, i.e., for massive neutralinos. In the case of the $`Z^0\gamma `$ line, the photon energy becomes vanishingly small as the neutralino mass $`m_\chi `$ tends to $`m_Z/2`$. Because the signal becomes swamped inside a very strong low–energy gamma–ray background, the sensitivity drops completely and the detection limit of the right panel exhibits a sharp increase around $`m_Z/2`$. Below that threshold, the $`Z^0\gamma `$ process is no longer kinematically allowed. There is an additional restriction arising from the energy threshold $`E_{\mathrm{th}}`$ of the Čerenkov telescope itself. Requiring that $`E_\gamma >E_{\mathrm{th}}`$ implies that the neutralino mass should exceed $$m_\chi \ge \frac{E_{\mathrm{th}}+\sqrt{m_Z^2+E_{\mathrm{th}}^2}}{2}.$$ (29) For a 50 GeV threshold, this translates into $`m_\chi \gtrsim `$ 77 GeV. We find that the lines are much more difficult to detect. The sensitivities presented here correspond to an exposure of 10 km<sup>2</sup> yr and an energy resolution $`\mathrm{\Delta }E/E`$ = 0.02, both of which are unreasonable with today's detectors.

## VI Discussion and Conclusions As is clear from Figs. 3 and 4, the supersymmetric parameter space will mostly remain below the sensitivity level of the next generation of instruments. A few configurations are potentially detectable provided that the ACT threshold is decreased down to 50 GeV. If neutralinos are clumped inside the dark matter halo of M87, the situation considerably improves. As the gamma–ray production rate goes as the square of the density, neutralinos annihilate more efficiently as they condense. That overall effect may be described by the clumpiness factor $`𝒞`$, which is defined as the increase of the global annihilation rate that results from a possible clumpy structure of the dark matter distribution. Neutralinos are cold dark matter species.
As such, they exhibit density fluctuations $$\left(\frac{\delta \rho }{\rho }\right)^2\propto \left|\delta _k\right|^2k^3,$$ (30) that are related to the comoving wave vector $`k`$ through $$\left|\delta _k\right|^2P(k)=\frac{Ak}{(1+\alpha k+\beta k^{1.5}+\gamma k^2)^2},$$ (31) with $`\alpha =1.71\times l`$, $`\beta =9\times l^{1.5}`$ and $`\gamma =l^2`$ where $`l=(\mathrm{\Omega }h^2)^{-1}`$. Normalization to $`\sigma _8=0.8`$ gives $`A=2.82\times 10^6\text{Mpc}^4`$ when $`\mathrm{\Omega }=1`$ and $`h=0.5`$. In a restricted wavelength range, it is approximated by a power law $$P(k)\propto k^n.$$ (32) The power spectrum $`P(k)`$ of density fluctuations behaves as $`k^{-3}`$ on small scales, i.e., for structures typically lighter than $`10^8`$ $`M_{}`$. As regards a possible clumpy structure of the halo around M87, the relevant mass range extends from $`M_\mathrm{i}\simeq 10^8`$ $`M_{}`$ up to $`M_\mathrm{s}\simeq 10^{13}`$ $`M_{}`$. The corresponding spectral index $`n`$ goes from $`-2.6`$ to $`-2.1`$. As shown below, structures smaller than $`M_\mathrm{i}`$ turn out to all have the same density. They contribute identically to the clumpiness factor $`𝒞`$. There is no larger structure than the halo itself whose mass $`M_\mathrm{s}`$ reaches $`10^{13}`$ $`M_{}`$ in the inner 100 kpc. Because the comoving wave vector $`k`$ scales as $`M^{-1/3}`$, neutralino density fluctuations depend on both the scale $`M`$ and the redshift $`z`$ as $$\frac{\delta \rho }{\rho }\propto \left(1+z\right)^{-1}M^{-(n+3)/6}.$$ (33) The redshift factor $`\left(1+z\right)^{-1}`$ is typical of the $`t^{2/3}`$ growth of density fluctuations in a flat matter–dominated universe. Notice that small scale perturbations, for which $`n=-3`$, all become non–linear at the same time. Their subsequent collapse leads to virialized structures whose densities have been enhanced by a factor of ∼ 180 with respect to the epoch of formation, when $`\delta \rho /\rho `$ reached unity. Small scale dark matter clumps all have therefore the same density today. The formation redshift of larger structures behaves as $$\left(1+z_\mathrm{F}\right)\propto M^{-(n+3)/6},$$ (34) so that today, neutralino clumps with mass above ∼ $`10^8`$ $`M_{}`$ have a density $$\rho (M)\simeq 180\left(1+z_\mathrm{F}\right)^3\propto M^{-(n+3)/2}.$$ (35) The density $`\rho (M_\mathrm{s})`$ of the largest possible clump should be comparable to the average dark matter density $`\rho _{\mathrm{DM}}`$ in the halo around M87. The distribution of clumps should follow the Press–Schechter law $$\frac{dN}{dM}=\frac{M_0}{M^2}.$$ (36) The normalization mass $`M_0`$ is obtained from the requirement that the clumps make up a fraction $`f`$ of the halo. Disregarding for the moment clumps with mass less than $`10^8`$ $`M_{}`$, we get $$M_0=\frac{fM_\mathrm{s}}{\mathrm{ln}\left(M_\mathrm{s}/M_\mathrm{i}\right)}.$$ (37) Some clumps are actually destroyed through the tidal stripping resulting from both their mutual interactions and the action of the gravitational field of M87. Just like globular clusters orbiting the Milky Way, they evaporate so that a fraction $`f`$ only of the initial population is expected to survive. Inside a clump with mass $`M`$, the annihilation rate of neutralinos is $`\sigma v\left\{\rho (M)/m_\chi \right\}^2`$ per unit volume. A net number $`\sigma v\rho (M)M/m_\chi ^2`$ of annihilations takes place in the clump per unit time.
We infer that the total annihilation rate of clumped neutralinos is obtained from the convolution $$\mathrm{\Gamma }_{\mathrm{clump}}=\int _{M_\mathrm{i}}^{M_\mathrm{s}}\frac{\sigma v}{m_\chi ^2}\rho (M)M\frac{dN}{dM}𝑑M.$$ (38) Taking into account the mass–density relation (35) as well as the mass distribution (36) of the clumps, we readily infer the rate $$\mathrm{\Gamma }_{\mathrm{clump}}=\frac{\sigma v}{m_\chi ^2}\rho _{\mathrm{DM}}M_0\int _{M_\mathrm{i}}^{M_\mathrm{s}}\left(\frac{M}{M_\mathrm{s}}\right)^{-(n+3)/2}\frac{dM}{M}.$$ (39) This may be compared to what a homogeneous distribution would yield $$\mathrm{\Gamma }_{\mathrm{hom}}=\frac{\sigma v}{m_\chi ^2}\rho _{\mathrm{DM}}M_\mathrm{s}.$$ (40) The clumpiness factor $`𝒞`$ may be understood as the enhancement ratio $`\mathrm{\Gamma }_{\mathrm{clump}}/\mathrm{\Gamma }_{\mathrm{hom}}`$. Clumps span less space than if their matter was homogeneously distributed. In their interiors, neutralinos nevertheless annihilate much more efficiently. The net effect is the increase $$𝒞=\left(\frac{2}{n+3}\right)\left(\frac{f}{\mathrm{ln}\left(M_\mathrm{s}/M_\mathrm{i}\right)}\right)\left[\left(\frac{M_\mathrm{s}}{M_\mathrm{i}}\right)^{(n+3)/2}-1\right].$$ (41) This expression gives $`𝒞\simeq 34f`$ for $`n=-2.1`$ and $`𝒞\simeq 4f`$ for $`n=-2.6`$. With a general power spectrum $`P(k)`$, this expression reads $$𝒞=\left(\frac{f}{\mathrm{ln}\left(M_\mathrm{s}/M_\mathrm{i}\right)}\right)\int _{M_\mathrm{i}}^{M_\mathrm{s}}\left(\frac{M}{M_\mathrm{s}}\right)^{-3/2}\left(\frac{P(M)}{P(M_\mathrm{s})}\right)^{3/2}\frac{dM}{M},$$ (42) where we go from $`P(k)`$ to $`P(M)`$ through $$k=\left(\frac{M}{\mathrm{M}_{}}\frac{G}{3H_0^2\pi ^2}\right)^{-1/3}.$$ (43) The CDM power spectrum leads to $`𝒞\simeq 13f`$. If we now assume that most of the clumps are small and that their mass does not exceed $`M_\mathrm{i}=10^8`$ $`M_{}`$, we find $$\mathrm{\Gamma }_{\mathrm{clump}}=\frac{\sigma v}{m_\chi ^2}\rho (M_\mathrm{i})\int M𝑑N.$$ (44) If that population of light clumps accounts for a fraction $`f`$ of the dark matter halo around M87, the previous relation translates into $$\mathrm{\Gamma }_{\mathrm{clump}}=\frac{\sigma v}{m_\chi ^2}\rho (M_\mathrm{i})fM_\mathrm{s}.$$ (45) The clumpiness factor becomes $$𝒞=f\frac{\rho (M_\mathrm{i})}{\rho (M_\mathrm{s})}=f\left(\frac{M_\mathrm{s}}{M_\mathrm{i}}\right)^{(n+3)/2},$$ (46) in the case of a power-law spectrum or, more generally, $$𝒞=f\left(\frac{M_\mathrm{s}}{M_\mathrm{i}}\right)^{3/2}\left(\frac{P(M_\mathrm{i})}{P(M_\mathrm{s})}\right)^{3/2}.$$ (47) It reaches a value of $`𝒞\simeq 40f`$ for the CDM power spectrum. Varying the fraction $`f`$ between 0.1 and 1, we conclude that depending on the typical size of the clumps, the gamma–ray production rate may be enhanced by factors as large as 40. The lower solid lines in Fig. 3 show the sensitivity limits of ACTs assuming that $`𝒞=40`$. A Čerenkov telescope operating with a 50 GeV threshold would detect a neutralino–induced gamma–ray emission from the giant elliptical galaxy M87 for a part of the supersymmetric configurations outlined in the upper–left panel of Fig. 3. Even with the annihilation rate enhanced by a factor of 40, the gamma–ray lines are out of reach.

###### Acknowledgements. We wish to thank E. Nuss for the information which he provided to us as well as for stimulating discussions. We thank D. Finkbeiner for assistance in using the dust maps of . During his visits to Annecy, E. Baltz has been supported in part by the Programme National de Cosmologie. At Berkeley E. Baltz is supported by grants from NASA and DOE.
# 1 Introduction

## 1 Introduction Many physicists are interested in noncommutative geometry, because they expect that it captures some features of quantum gravity. It is intriguing to see that in string theory and M theory, which is considered to be the most promising model of quantum gravity, we come across several occasions in which space-time coordinates become noncommutative. In this note we would like to discuss an example in which noncommutativity of spacetime coordinates appears in string theory. The example we study here is the worldvolume theory of D-branes. As was pointed out in , D$`p`$-branes can be represented as a configuration of infinitely many D$`(p-2r)`$-branes. If such a relation holds, the worldvolume theory of the D$`p`$-branes can also be regarded as the worldvolume theory of infinitely many D$`(p-2r)`$-branes. We will show that some of the coordinates on the worldvolume of the D$`p`$-branes become noncommutative if one considers it as the worldvolume theory of D$`(p-2r)`$-branes. Actually the noncommutative theory we have is noncommutative Yang-Mills theory. Such a noncommutative description of the D$`p`$-branes should be equivalent to the usual commutative descriptions. We will pursue this equivalence and show that these two descriptions correspond to two different ways to fix the reparametrization invariance and its generalization on the worldvolume. Therefore we have here an example where a noncommutative theory is equivalent to a commutative theory. Such an equivalence in a similar context was studied in a recent paper . We will comment on the relation between our results and theirs. In section 2, we explain how D$`p`$-branes can be expressed as a configuration of infinitely many D$`(p-2r)`$-branes. In section 3, we study the worldvolume theory of the D$`p`$-branes regarding it as a configuration of the D$`(p-2r)`$-branes. Section 4 is devoted to discussions. This note is based on a talk presented at "Workshop on Noncommutative Differential Geometry and its Application to Physics", Shonan-Kokusaimura, Japan, May 31–June 4, 1999. After I completed this note, I was informed that there are papers whose results have considerable overlap with ours. Especially, in the first paper, they realized that the commutative and noncommutative descriptions of D-branes correspond to two different ways of gauge fixing, and the explicit relation between the commutative and noncommutative gauge fields, which coincides with ours, was given in .

## 2 D$`p`$-branes from D$`(p-2r)`$-branes In this section we will explain how D$`p`$-branes can be expressed as a configuration of infinitely many D$`(p-2r)`$-branes. For simplicity, the special case of expressing one D$`p`$-brane by D-instantons, namely the $`p=2r-1`$ case, will be treated here. We will comment on more general cases at the end of this section. We study this problem in the Euclidean space $`𝐑^D`$. (Construction of D$`p`$-branes from D$`(p-2r)`$-branes was done on the torus in . Things discussed here are partially generalized to the space compactified on a torus in .) $`D`$ should be $`26`$ for bosonic string and $`10`$ for superstring. We will first deal with bosonic string theory in which the whole manipulations are simpler. Later we will explain how one can generalize the arguments to the superstring case. The configuration of infinitely many D-instantons can be expressed by the $`\infty \times \infty `$ hermitian matrices $`X^M(M=1,\dots ,D)`$.
The one we consider is $`X^i`$ $`=`$ $`\widehat{P}^i,(i=1,\dots ,p+1)`$ $`X^M`$ $`=`$ $`0(M=p+2,\dots ,D),`$ (1) where $`\widehat{P}^i(i=1,\dots ,p+1)`$ satisfy $$[\widehat{P}^i,\widehat{P}^j]=i\theta ^{ij}.$$ (2) In this note, the $`(p+1)\times (p+1)`$ matrix $`\theta `$ is assumed to be invertible. Let us show that this configuration of D-instantons is equivalent to a D$`p`$-brane. In order to show that two different configurations of D-branes are equivalent, one should prove that the open string theories corresponding to these configurations are equivalent. A quick way to see the equivalence is to look at the boundary states. The boundary state $`|B`$ corresponding to the configuration eq.(1) can be written as follows: $$|B=\text{TrP}e^{-i\int _0^{2\pi }𝑑\sigma p_i\widehat{P}^i}|B_1.$$ (3) Here $`p_i(\sigma )`$ is the variable conjugate to the string coordinate $`x^i(\sigma )`$ and is equal to $`\frac{1}{2\pi \alpha ^{\prime }}\dot{x}_i`$ in the usual flat background. $`|B_1`$ denotes the boundary state for a D-instanton at the origin and satisfies $`x^i(\sigma )|B_1=0`$. $`|B_1`$ includes also the ghost part which is not relevant to the discussion here. The factor in front of $`|B_1`$ is an analogue of the Wilson loop and corresponds to the background eq.(1). Eq.(3) can be rewritten by using the path integral as $$|B=\int [dP]e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1,$$ (4) where $`\omega _{ij}=(\theta ^{-1})_{ij}`$. It is straightforward to perform the path integral in eq.(4). Using the Fock space representation of $`|B_1`$, the path integral is Gaussian. The determinant factor can be regularized in the usual way , and one can show that $`|B`$ coincides with the boundary state for a D$`p`$-brane with the $`U(1)`$ gauge field strength $`F_{ij}=\omega _{ij}`$ on the worldvolume. Knowing that the path integration is Gaussian, it is easy to confirm this fact. Indeed, one can show that the following identity holds: $`0`$ $`=`$ $`{\displaystyle \int [dP]\frac{\delta }{\delta P^i(\sigma )}e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1}`$ (5) $`=`$ $`[i\omega _{ij}\partial _\sigma x^j-ip_i(\sigma )]|B.`$ Therefore $`|B`$ should coincide with $`\mathrm{exp}[\frac{i}{2}\int 𝑑\sigma x^i\partial _\sigma x^j\omega _{ij}]|B_p`$ up to normalization. Here $`|B_p`$ denotes the boundary state for a D$`p`$-brane satisfying $`p_i(\sigma )|B_p=0`$. Hence $`|B`$ is equivalent to the boundary state for a D$`p`$-brane with the $`U(1)`$ gauge field strength $`F_{ij}=\omega _{ij}`$ on the worldvolume. In the above analysis using the path integral expression eq.(4), the overall normalization of the boundary state is ambiguous. Actually one can prove the equivalence including the normalization by showing that the open string theory corresponding to the configuration eq.(1) is equivalent to the one corresponding to a D$`p`$-brane. We refer to for more details. It is easy to supersymmetrize the above arguments.
In the NSR formalism the boundary state for a D-instanton can be written as a sum of four states $`|B;\pm _{1,I}`$ where $`I=NS,R`$ indicates the sector it belongs to and $`x^M(\sigma )|B;\pm _{1,I}=0,`$ $`(\psi ^M(\sigma )\pm i\stackrel{~}{\psi }^M(\sigma ))|B;\pm _{1,I}=0.`$ (6) The supersymmetric generalization of eq.(4) can be given for each $`|B;\pm _{1,I}`$ as $$|B;\pm _I=\int [dPd\chi ]e^{\frac{1}{2}{\scriptscriptstyle \int 𝑑\sigma (iP^i\partial _\sigma P^j+\chi ^i\chi ^j)\omega _{ij}}-{\scriptscriptstyle \int 𝑑\sigma (ip_iP^i-\pi _i\chi ^i)}}|B;\pm _{1,I},$$ (7) where $`\pi ^M(\sigma )=\frac{1}{2}(\psi ^M(\sigma )-i\stackrel{~}{\psi }^M(\sigma ))`$. We can show following the same arguments as above that this boundary state coincides with the boundary state for a D$`p`$-brane up to normalization. It is also possible to prove the equivalence of the open string theories . Since the arguments in this section are essentially about the variables $`x^i,\psi ^i(i=1,\dots ,p+1)`$ on the worldsheet, it is quite straightforward to apply them to prove that a D$`p`$-brane can be expressed as a configuration of infinitely many D$`(p-2r)`$-branes. It is also easy to generalize the argument to the case of $`N`$ D$`p`$-branes. In such a case we should consider the block diagonal background $$X^i=\widehat{P}^i\otimes I_N,$$ (8) where $`I_N`$ is the $`N\times N`$ identity matrix which is an element of the $`U(N)`$ Lie algebra. The expression of the boundary state in eq.(4) should be modified to $$|B=\int [dP]\text{TrP}e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1,$$ (9) where TrP here means the trace of the path-ordered product with respect to the $`U(N)`$ indices. It is easy to see that this configuration corresponds to $`N`$ D$`p`$-branes following the same arguments as above.

## 3 Worldvolume Theory In the previous section, we explained that the open string theory corresponding to the configuration of D-instantons in eq.(1) is equivalent to the one for a D$`p`$-brane with $`F=\omega `$. This means that the worldvolume theory of one D$`p`$-brane can also be described as the worldvolume theory of D-instantons. In this section, we investigate the worldvolume theory from two different points of view, i.e. either as the worldvolume theory of one D$`p`$-brane or of D-instantons. We call them the D$`p`$-brane picture and the D-instanton picture respectively. We will show how these two are related to each other. The argument in this section will be given for the bosonic string case for simplicity. For the superstring case, similar results can be derived starting from the expression eq.(7). Our argument in this section can be applied to study the worldvolume theory of D$`p`$-branes by regarding it as the worldvolume theory of infinitely many D$`(p-2r)`$-branes for $`2r<p+1`$. It is also straightforward to deal with the case when the number of D$`p`$-branes is more than one. We will comment on these generalizations at the end of this section.

### 3.1 D$`p`$-brane Picture Let us start from the following expression of the boundary state for the D$`p`$-brane: $$|B=\int [dP]e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1.$$ (10) This corresponds to a D$`p`$-brane longitudinal to the $`x^i(i=1,\dots ,p+1)`$ directions with the $`U(1)`$ gauge field strength $`F=\omega `$. The worldvolume theory of a D$`p`$-brane consists of a gauge field $`A_i`$ and scalar fields $`\varphi ^M(M=p+2,\dots ,D)`$.
$`\varphi ^M`$ correspond to the shape of the worldvolume which can be expressed by the equation $`x^M=\varphi ^M(x^1,\dots ,x^{p+1})`$. We are considering here the field configurations in the static gauge, so that the coordinates on the worldvolume are taken to be $`x^1,\dots ,x^{p+1}`$. The boundary state corresponding to a configuration of $`A_i,\varphi ^M`$ is easily guessed to be $$|B=\int [dP]\mathrm{exp}[i\int 𝑑\sigma A_i(P)\partial _\sigma P^i-i\int 𝑑\sigma (p_iP^i+\sum _{M=p+2}^Dp_M\varphi ^M(P))]|B_1.$$ (11) Indeed this coincides with eq.(10) when $`F=\omega ,\varphi ^M=0`$. For small deformations $`\delta A_i,\delta \varphi ^M`$ from the background $`F=\omega ,\varphi ^M=0`$, we expect that the boundary state $`|B`$ in eq.(10) is modified by the vertex operator as $`(1+i{\displaystyle \int 𝑑\sigma (\delta A_i(x)\partial _\sigma x^i-\delta \varphi ^M(x)p_M)}){\displaystyle \int [dP]e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1},`$ (12) which is consistent with eq.(11). Moreover since $`x^M(\sigma )|B_1=0`$, this boundary state exactly describes the emission of a closed string from the worldvolume $`x^M=\varphi ^M(x^1,\dots ,x^{p+1})`$. The contribution of the gauge field is in the form of the Wilson loop. This state will be BRS invariant only for those $`A_i,\varphi ^M`$ satisfying the equations of motion. $`Q_B|B=0`$ implies that the path integral measure is invariant under the reparametrization $`\sigma \to \sigma ^{\prime }(\sigma )`$. Imposing such a condition, one may be able to deduce the equations of motion after calculations similar to . Thus in this picture, the worldvolume theory is a $`U(1)`$ gauge theory with scalars $`\varphi ^M`$. In this note, we always assume that a Pauli-Villars regularization on the worldsheet is taken so that the noncommutativity due to the regularization discussed in does not occur.

### 3.2 D-instanton Picture Now let us consider the worldvolume theory as the worldvolume theory of D-instantons. The boundary state eq.(10) corresponds to the configuration eq.(1) of D-instantons. A general configuration of D-instantons can be described as $$X^M=\varphi ^M(\widehat{P})(M=1,\dots ,D),$$ (13) if one assumes that the operators $`\widehat{P}^i(i=1,\dots ,p+1)`$ generate all the possible operators acting on the Chan-Paton indices of D-instantons. Here in defining the functions $`\varphi ^M`$, we need to specify the ordering of the operators $`\widehat{P}^i`$, which will be given shortly. What will be the form of the boundary state corresponding to the configuration eq.(13)? A natural guess is $$|B=\int [dP]e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_M\varphi ^M(P)}}|B_1.$$ (14) For small deformations $`\delta \varphi ^M`$ from the background eq.(1), we expect that the boundary state in eq.(10) is modified as $`{\displaystyle \int [dP](1-i\int 𝑑\sigma \delta \varphi ^M(P)p_M)e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1}.`$ (15) Since the vertex operators corresponding to the transverse deformations $`\varphi ^M(M=p+2,\dots ,D)`$ coincide with those in eq.(12), we expect that the transverse $`\varphi ^M`$ appear in the same way as in eq.(11). Eq.(14) is the unique choice satisfying this condition and the $`D`$-dimensional rotational invariance. The boundary state eq.(14) describes the emission of a closed string from the hypersurface $`X^M=\varphi ^M(P)`$.
Therefore the fields $`\varphi ^M(P)`$ correspond to the shape of the D-branes and $`P^i`$ play the role of the coordinates of the $`p`$-brane. In order to be consistent with the path integral form eq.(14), the ordering in eq.(13) should be chosen to be the Weyl ordering. To be more explicit, for each c-number function $`f(P)`$, let us define the Weyl-ordered function $`f(\widehat{P})`$ to be $$f(\widehat{P})=\int d^{p+1}ke^{ik_i\widehat{P}^i}\stackrel{~}{f}(k),$$ (16) where $$\stackrel{~}{f}(k)=\int \frac{d^{p+1}P}{(2\pi )^{p+1}}e^{-ik_iP^i}f(P).$$ (17) Then $`\varphi ^M(\widehat{P})`$ on the right hand side of eq.(13) should be understood to be the Weyl-ordered function corresponding to the c-number function $`\varphi ^M(P)`$ in eq.(14). The action and other physical quantities in the worldvolume theory of D-instantons are written as a trace of a function of the Weyl-ordered operators in eq.(16). However it is more convenient for us to rewrite everything in terms of the c-number functions $`\varphi ^M(P)`$ in eq.(17). We can do so by using the following formula: $$Tr(f_1(\widehat{P})f_2(\widehat{P})\dots f_n(\widehat{P}))=\int \frac{d^{p+1}P}{(2\pi )^{(p+1)/2}\sqrt{|\text{det}\theta |}}f_1(P)\star f_2(P)\star \dots \star f_n(P).$$ (18) Here the $`\star `$-product is defined as $$f(P)\star g(P)=e^{\frac{i}{2}\theta ^{ij}\frac{\partial }{\partial \xi ^i}\frac{\partial }{\partial \zeta ^j}}f(P+\xi )g(P+\zeta )|_{\xi =\zeta =0}.$$ (19) Hence, a trace of a function of Weyl-ordered operators can be rewritten in terms of the corresponding c-number functions by replacing products of operators by $`\star `$-products of the corresponding c-number functions and the trace by an integral. Thus if one regards the worldvolume theory as a theory of D-instantons, the description should be noncommutative. $`P^i`$ can be considered as the coordinates on the $`p`$-brane and they are noncommutative under the $`\star `$-product, reflecting the commutation relation eq.(2). Now let us discuss what kind of theory this noncommutative field theory is. The Lagrangian of the worldvolume theory of D-instantons can be written in terms of the commutators of the matrices $`\varphi ^M(\widehat{P})`$. Since we started from the background in eq.(1), let us express $`\varphi ^i(i=1,\dots ,p+1)`$ in the form of the background and the fluctuations around it: $$\varphi ^i(\widehat{P})=\widehat{P}^i+\theta ^{ij}a_j(\widehat{P}).$$ (20) The c-number expressions corresponding to the commutators of $`\varphi ^i(\widehat{P})`$ are easily calculated to be $$[\varphi ^i(\widehat{P}),\varphi ^j(\widehat{P})]=i\theta ^{ij}-i(\theta f\theta )^{ij},$$ (21) where $$f_{ij}=\partial _ia_j(P)-\partial _ja_i(P)-ia_i\star a_j(P)+ia_j\star a_i(P).$$ (22) $`f_{ij}`$ can be considered as the field strength of a noncommutative Yang-Mills field $`a_i`$. $`\varphi ^i(\widehat{P})`$ essentially corresponds to the covariant derivative $`\partial +ia`$. Thus the commutators of $`\varphi ^i`$ with other fields give the covariant derivatives of these fields. Other commutators are interpreted as the gauge covariant commutators in the noncommutative Yang-Mills theory. Since the Lagrangian is written in terms of these commutators, the noncommutative theory we have here can be considered as noncommutative Yang-Mills theory . The gauge invariance of the theory stems from the transformation $$\delta \varphi ^M(\widehat{P})=i[ϵ,\varphi ^M(\widehat{P})].$$ (23) As a theory of D-instantons this is the $`U(\infty )`$ transformation under which the theory should be invariant.
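As an aside from us (not in the original note), the $`\star `$-product (19) is easy to experiment with symbolically. The sketch below, in Python with sympy, truncates the Moyal expansion at second order in $`\theta `$ — exact for the linear functions used here — and reproduces the commutation relation $`P^1\star P^2-P^2\star P^1=i\theta ^{12}`$:

```python
import itertools
import sympy as sp

P1, P2, theta = sp.symbols('P1 P2 theta')
coords = (P1, P2)
Theta = {(0, 1): theta, (1, 0): -theta, (0, 0): 0, (1, 1): 0}

def star(f, g, nmax=2):
    """Moyal star product of f and g, truncated at order nmax in theta
    (exact here, since only low-degree polynomials are used)."""
    total = sp.Integer(0)
    for n in range(nmax + 1):
        coeff = (sp.I / 2)**n / sp.factorial(n)
        for idx in itertools.product((0, 1), repeat=2 * n):
            i_idx, j_idx = idx[:n], idx[n:]
            w = sp.Integer(1)
            for i, j in zip(i_idx, j_idx):
                w *= Theta[(i, j)]      # product of theta^{ij} factors
            if w == 0:
                continue
            df, dg = f, g
            for i in i_idx:
                df = sp.diff(df, coords[i])
            for j in j_idx:
                dg = sp.diff(dg, coords[j])
            total += coeff * w * df * dg
    return sp.expand(total)

# the commutator of coordinates under the star product
print(star(P1, P2) - star(P2, P1))      # -> I*theta
```

With this machinery in hand, we return to the gauge transformation (23).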
In the c-number formulation such a transformation corresponds to the coordinate transformation $$\delta P^i=\theta ^{ij}\partial _jϵ(P).$$ (24) This is the coordinate transformation which preserves $`\omega =\theta ^{-1}`$. If one regards $`\omega `$ as a symplectic form, such transformations are the canonical transformations. The invariance under the canonical transformations will be discussed in the next subsection.

### 3.3 Relation between the Two Pictures In the previous subsections we obtained two points of view about the worldvolume theory. Since they are supposed to describe the same thing, there should be a correspondence between the two. In the D$`p`$-brane picture, the worldvolume fields are the gauge field $`A_i(i=1,\dots ,p+1)`$ and the scalars $`\varphi ^M(M=p+2,\dots ,D)`$, where the coordinates on the worldvolume are taken to be $`x^i(i=1,\dots ,p+1)`$. On the other hand, the worldvolume fields in the D-instanton picture are $`\varphi ^M(P)(M=1,\dots ,D)`$ and $`P^i`$ are the coordinates on the worldvolume. As we noticed in the previous section, the fields $`\varphi ^M(M=p+2,\dots ,D)`$ common to both correspond to each other. Therefore we should find how the fields $`A_i`$ and $`\varphi ^i`$ are related. Let us first consider small deformations $`\delta A_i,\delta \varphi ^i`$ from the background eq.(1). From eq.(12), one can see that $`\delta A_i`$ changes the boundary state as $$\delta |B=i\int [dP]\int 𝑑\sigma \delta A_i(P)\partial _\sigma P^ie^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1,$$ (25) which should be compared with the variation corresponding to $`\delta \varphi ^i`$ from eq.(15): $$\delta |B=-i\int [dP]\int 𝑑\sigma \delta \varphi ^i(P)p_ie^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1.$$ (26) The relation between these two variations can be derived from the following identity $`0`$ $`=`$ $`{\displaystyle \int [dP]\frac{\delta }{\delta P^i(\sigma )}e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1}`$ (27) $`=`$ $`{\displaystyle \int [dP][i\omega _{ij}\partial _\sigma P^j-ip_i]e^{\frac{i}{2}{\scriptscriptstyle \int 𝑑\sigma P^i\partial _\sigma P^j\omega _{ij}}-i{\scriptscriptstyle \int 𝑑\sigma p_iP^i}}|B_1},`$ which implies that $`\delta |B`$ in eqs.(25)(26) coincide with each other when $`\delta A_i=\omega _{ij}\delta \varphi ^j`$. Such a relation was given in . In order to see the relation between the two pictures for finite deformations, the most convenient way is to consider a description involving the fields $`A_i`$ and $`\varphi ^M(M=1,\dots ,D)`$. From eqs.(11)(14), the boundary state involving all these fields should be $$|B=\int [dP]e^{i{\scriptscriptstyle \int 𝑑\sigma A_i(P)\partial _\sigma P^i}-i{\scriptscriptstyle \int 𝑑\sigma p_M\varphi ^M(P)}}|B_1.$$ (28) However there are too many fields in such a description and there should be a symmetry to reduce the number of them. The boundary state eq.(28) is invariant under the gauge transformation $`\delta A_i=\partial _i\lambda `$. Moreover, in the D$`p`$-brane picture, considering $`A_i`$ and $`\varphi ^M`$ just means considering the theory before the gauge fixing of the reparametrization invariance. Therefore the theory should have reparametrization invariance. Indeed we can argue that the boundary state eq.(28) is invariant under the following transformation $`\delta A_i(P)=ϵ^j(P)F_{ji}(P),`$ $`\delta \varphi ^M(P)=ϵ^i(P)\partial _i\varphi ^M(P),`$ (29) because the variation is proportional to a sum of the equations of motion for $`P^i`$.
This transformation coincides with the coordinate transformation up to a field-dependent gauge transformation because $$\delta A_i(P)=ϵ^j\partial _jA_i+\partial _iϵ^jA_j-\partial _i(ϵ^jA_j).$$ (30) Since the boundary state eq.(28) is gauge invariant, it is reparametrization invariant. Therefore the description involving $`A_i`$ and $`\varphi ^M(M=1,\dots ,D)`$ is invariant under the reparametrization on the worldvolume. The D$`p`$-brane picture obviously corresponds to the static gauge. On the other hand, one can see from eq.(14) that the D-instanton picture apparently corresponds to the gauge $`F_{ij}=\omega _{ij}`$. We are not sure if such a gauge can be taken for an arbitrary configuration of the gauge field $`F`$, but at least when we are thinking about the fluctuation from the background $`F=\omega `$ perturbatively, it seems all right. Such a gauge does not fix the whole reparametrization invariance on the worldvolume. The residual invariance consists of the coordinate transformations preserving $`\omega _{ij}`$, i.e. the canonical transformations. We saw such an invariance as the c-number counterpart of the $`U(\infty )`$ invariance in the previous subsection. Since the difference is from the gauge choice, we can give the explicit relation between the static gauge variables $`A_i(x),\varphi _{st}^M(x)(M=p+2,\dots ,D)`$ and the variables $`\varphi _{nc}^M(P)(M=1,\dots ,D)`$ in the noncommutative description (here the subscripts $`st`$ and $`nc`$ distinguish the $`\varphi ^M`$ in the two gauges), at least classically, i.e. for small $`\theta `$: $`\varphi _{nc}^M(P)=\varphi _{st}^M(\varphi _{nc}^1(P),\dots ,\varphi _{nc}^{p+1}(P))(M=p+2,\dots ,D),`$ (31) $`\omega _{ij}=F_{kl}(\varphi _{nc}^1(P),\dots ,\varphi _{nc}^{p+1}(P)){\displaystyle \frac{\varphi _{nc}^k}{P^i}}{\displaystyle \frac{\varphi _{nc}^l}{P^j}}.`$ (32) Eq.(32) can be rewritten in terms of the noncommutative Yang-Mills field $`a_i`$ using eq.(20) as $$(\theta ^{-1})_{ij}=(M^TF(P+\theta a)M)_{ij},$$ (33) where $$M_j^i=\delta _j^i+\theta ^{ik}\partial _ja_k(P).$$ (34) This gives an explicit relation between the commutative gauge field $`A_i(x)`$ and the noncommutative gauge field $`a_i(P)`$. Now let us comment on two generalizations of our results in this section. The first one is to consider more than one D$`p`$-brane. Starting from the background eq.(8), we can follow the arguments of the $`N=1`$ case. This time all the fields $`A_i`$ and $`\varphi ^M`$ are in the adjoint representation of $`U(N)`$. We should put TrP in front of the right hand sides of eqs.(10),(11),(14),(28) and then we can follow the same arguments as in the $`N=1`$ case. The reparametrization invariance of the boundary state in this case can be derived as follows. The path-ordered trace version of eq.(28) can be rewritten by introducing fermions $`\psi `$ in the fundamental representation of $`U(N)`$ as $$|B=\int [dPd\psi ]\mathrm{exp}[\int 𝑑\sigma \psi ^{\dagger }\partial _\sigma \psi -i\int 𝑑\sigma A_i^a(P)\partial _\sigma P^i\psi ^{\dagger }t^a\psi -i\int 𝑑\sigma p_M\varphi ^{M,a}(P)\psi ^{\dagger }t^a\psi ]|B_1.$$ (35) Here $`t^a`$ are the generators of $`U(N)`$ in the fundamental representation. In this form, we can see that the boundary state is invariant under the transformation $`\delta A_i(P)=ϵ^j(P)F_{ji}(P),`$ $`\delta \varphi ^M(P)=ϵ^i(P)D_i\varphi ^M(P),`$ (36) because the variation is proportional to the equations of motion for $`P^i,\psi `$. This transformation is again equivalent to the coordinate transformation up to a field-dependent gauge transformation.
Moreover we can argue that the boundary state is invariant under $`\delta A_i(P)=ϵ^j(P)t^aF_{ji}(P),`$ $`\delta \varphi ^M(P)=ϵ^i(P)t^a\partial _i\varphi ^M(P),`$ (37) because the variation is proportional to $`lim_{\sigma ^{\prime }\to \sigma }\psi ^{\dagger }t^a\psi (\sigma ^{\prime })\times (\text{equations of motion})(\sigma )`$. This transformation can be considered to be a nonabelian generalization of the coordinate transformation up to a field-dependent gauge transformation. Fixing such invariance by taking the static gauge or the gauge $`F=\omega `$, we get the commutative or the noncommutative description respectively. Secondly, it is also possible to study the worldvolume theory of D$`p`$-branes regarding it as the worldvolume theory of infinitely many D$`(p-2r)`$-branes. We obtain a description in which some of the coordinates on the worldvolume become noncommutative.

## 4 Discussions In this note, we have shown that D$`p`$-branes with constant field strength $`F_{ij}`$ can be represented as a configuration of infinitely many D$`(p-2r)`$-branes. The worldvolume theory of the D$`p`$-branes can be analysed by regarding it as the worldvolume theory of the D$`(p-2r)`$-branes, and we obtain a noncommutative description of the worldvolume theory. The system we studied here is gauge equivalent to the one studied in . In that paper, D-branes in a constant $`B_{ij}`$ background were considered and the authors obtained commutative and noncommutative descriptions of the worldvolume theory depending on the regularization. Moreover they propose descriptions with continuously varying $`\theta `$ which connect the commutative and noncommutative descriptions. Actually we can realize such descriptions also in our formalism by considering our system in a constant $`B_{ij}`$ background . Therefore we suspect that our noncommutative description is equivalent to a choice of $`\theta `$ in . Since we have an explicit relation between the commutative and noncommutative descriptions which is valid for small $`\theta `$, it may be interesting to compare our relation and theirs. Here we just comment on one crucial difference. In , the gauge transformations of the commutative and noncommutative descriptions are related to each other, but in the relation we obtained, the gauge invariance of the noncommutative theory is the residual reparametrization invariance, which is not related to the commutative gauge invariance. Therefore the relation we obtained in eq.(32) is for gauge invariant quantities. Since we have here an example of a relation between a commutative and a noncommutative theory, it may be generalized and used in studying other noncommutative theories. One example is the noncommutative geometric formulation of open string field theory . We think that the relation we studied in this note may be relevant in revealing symmetries hidden in string field theory. We hope that we can come back to this problem in the future.

## Acknowledgements We would like to thank the organizers of the workshop for the wonderful workshop. I am grateful to H. Aoki, S. Iso, H. Kawai, Y. Kitazawa and T. Tada for collaborations and K. Okuyama for discussions. This work was supported by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture of Japan.
no-problem/9909/hep-ph9909528.html
ar5iv
text
## 1 Introduction The concept of a magnetic monopole was introduced into modern physics in 1931 by Paul Dirac . He postulated the existence of an isolated magnetic charge $`g`$. Using general principles of quantum mechanics, he related the electric and magnetic charge values: $`ge=\frac{n}{2}\hbar c`$, where $`e`$ is the electron electric charge, $`\hbar `$ is the Planck constant, $`c`$ is the speed of light, and $`n=\pm 1,\pm 2,\mathrm{}`$ is an integer. Numerous but unsuccessful experimental searches for these magnetic monopoles at accelerators and in cosmic rays have been carried out since then. New interest in this problem arose in 1974, when Polyakov and 't Hooft showed that such objects exist as solutions in a wide class of models with spontaneously broken symmetry . The magnetic charge of the Polyakov–'t Hooft monopole is a multiple of the Dirac one, $`g=2ne/\alpha `$, and its mass $`M_g`$ lies in the range of $`10^8`$–$`10^{16}`$ GeV. Such GUT monopoles are not point-like and have a complex structure. It is completely clear that the massive Polyakov–'t Hooft monopoles cannot be produced at accelerators; therefore we do not consider accelerator experiments. Moreover, we assume that the monopoles that reach the surface of the Earth are gravitationally bound either to the Galaxy (if $`\beta =v/c<10^{-3}`$) or to the Sun (when $`\beta <10^{-4}`$). The monopole ionization loss in matter has been evaluated by many authors (see the reviews ). For a fast monopole the ionization loss appreciably (about 4700 times!) exceeds the loss of minimum ionizing particles (MIPs), which is $`dE/dl\approx 2\,MeV\,cm^2/g`$. In units of the ionization loss of a particle with charge $`e`$, the monopole ionization loss is given by: $$\left(\frac{dE}{dl}\right)_g=\left(\frac{dE}{dl}\right)_e\left(\frac{g}{e}\right)^2\beta ^2.$$ (1) If we recall that the ionization loss of a charged particle is proportional to $`1/\beta ^2`$, then it is clear that the loss of a monopole does not depend on velocity. It should be pointed out here that in GUTs the monopole is a very heavy particle. Therefore a supermassive monopole is practically always non-relativistic! When $`\beta \sim 10^{-3}`$ the ionization loss of a monopole decreases to the level of the energy loss of a MIP. At $`\beta <10^{-4}`$ the ionization mechanism of energy loss is practically switched off, because in this case the energy of a monopole–atom collision is no longer sufficient to ionize the atom. To estimate the maximum monopole velocity $`v`$, it is natural to take the velocity of the Sun relative to the background radiation, $$\beta =\frac{v}{c}\approx 10^{-3}.$$ (2) We recall here that the virial velocity for our Galaxy does not exceed $`10^{-3}c`$ either. For the detection of a slow monopole with efficiency close to 1, superconducting induction detectors were designed and constructed. The single event registered by the Cabrera detector triggered a burst of experimental searches for monopoles of cosmic origin. However, significant progress in sensitivity (and corresponding limits on the monopole flux) has been achieved only for ionizing detectors .
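As a quick numerical aside, the factors quoted above follow directly from the Dirac condition. The sketch below (assuming $`n=1`$) reproduces the ratio $`g/e`$ and the "about 4700 times" enhancement in eq. (1):

```python
# Minimal arithmetic check of the numbers quoted above (n = 1 assumed).
alpha = 1.0 / 137.036            # fine-structure constant
g_over_e = 1.0 / (2.0 * alpha)   # Dirac condition ge = (1/2) hbar c => g/e = 1/(2 alpha)
ratio = g_over_e ** 2            # the (g/e)^2 factor appearing in eq. (1)
print(f"g/e     = {g_over_e:.1f}")   # ~68.5
print(f"(g/e)^2 = {ratio:.0f}")      # ~4700, the enhancement over a unit charge
```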
The sensitivity of these ionizing detectors is close to the extended Parker bound : $`1\times 10^{-16}\,(m/10^{17}\,GeV)\,cm^{-2}s^{-1}sr^{-1}`$. One of the primary aims of the MACRO detector at the Gran Sasso underground lab (in Italy, at an average depth of 3700 hg/cm<sup>2</sup>) is the search for magnetic monopoles at a sensitivity level well below the Parker bound for a large range of velocities, $`4\times 10^{-5}<\beta <1`$, $`\beta =v/c`$. MACRO uses three different types of detectors: liquid scintillators, limited streamer tubes and nuclear track detectors (CR39 and Lexan) arranged in a modular structure of six “supermodules”. They consider magnetic monopoles with enough kinetic energy to traverse the Earth, $`4\times 10^{-5}<\beta <1`$. Another possibility for the detection of a slow GUT monopole is the search for proton decay induced by a heavy monopole , . Recently the group working with the Lake Baikal Cherenkov detector has set the following limit on the flux of heavy magnetic monopoles and of Q-balls, which are able to induce proton decay: $`<3.9\times 10^{-16}\,cm^{-2}s^{-1}sr^{-1}`$. Here we consider the possibility of building a new type of detector of slow monopoles. Our idea is based on the registration of the interaction of a slow monopole of cosmic-ray origin with a film of an easy-axis, high-coercivity ferromagnet. As the sensitive element of such a detector one can use an advanced storage medium, namely the magneto-optical disk (MO disk) . (To our knowledge, for modern MO disks an areal density of $`45\,Gbit/in^2`$ has been demonstrated using near-field techniques, with a theoretical possibility in excess of $`100\,Gbit/in^2`$.) A slow monopole passing through the magnetic coating of an MO disk leaves a distinctive magnetic track, and this track can be detected. It is important to note that a considerable surface can be covered by MO disks. They can be exposed for any reasonable time without any maintenance, as in emulsion chamber experiments or the CR39 nuclear track subdetector of MACRO . Apparently, the use of such passive detectors will be especially effective in the search for relict monopoles entrapped in ferromagnetic inclusions of iron ore. Such monopoles should be extracted from the ore during the melting process. These monopoles are released over the relatively small cross-section of the furnace and fall freely downward. Such slow moving monopoles can be detected by a passive detector with MO disks. These disks are to be placed, e.g., in a cavity under the furnace. The effective exposure time, normalized to the mass of ore from which the monopoles are extracted, can be very large. We note that during the exposure no detector maintenance is required. After the exposure the MO disks should be placed into a specialized device to find the traces of the magnetic monopole. Our actual, though longer-term, aim is to study the possibility of building such a specialized device. ## 2 Track formation by the slowly moving monopole. It is expected that a slow monopole, moving transversely through a magnetized ferromagnetic film, should leave a distinctive track of magnetization in it. We can use this phenomenon to design an effective detector of supermassive cosmic monopoles. For this purpose we shall consider in detail the mechanism of magnetic track formation by monopoles of this type . Let us consider a thin layer of an easy-axis hard ferromagnet, magnetized perpendicular to the surface along the easy axis.
It is easy to see that the external magnetic field is absent (a double layer!), but the surface density of the magnetostatic energy of such a configuration is rather large. However, if the anisotropy constant $`K_u`$ is sufficiently large, and the effective field of anisotropy exceeds the value of the demagnetizing field, then the system is in a metastable state. When the film magnetization is reversed in a small area, a stable cylindrical domain (magnetic bubble) of a certain radius is created. Such a one-domain state is characterized by an equilibrium radius $`r_{eq}`$ and a collapse radius $`r_c`$, where $`r_{eq}>r_c`$. As the domain radius decreases and approaches the collapse radius $`r_c`$, the magnetic bubble becomes unstable and disappears. A typical magnetic bubble size is about 30 nm for a film thickness of 100 nm. What is the field of a monopole at such a distance? $$H=\frac{\mathrm{\Phi }_0}{4\pi r_0^2}\approx 2\times 10^3\,\mathrm{Oe},$$ (3) which is without any doubt enough for the re-magnetization of a material with a coercivity of the order of 1 Oe. We have introduced here a characteristic radius $$r_0=2\mu _0\gamma /I_s^2$$ which defines the minimal radius of collapse. All of the above is true for films with a high mobility of domain walls. Films with low wall mobility are even more stable, and at the same time, domains with a radius less than the collapse radius can, in principle, exist in them. Thus, in the $`Co/Pt`$ films the movement of domain walls is suppressed, and in a 20 nm film transverse domains of cylindrical form with a diameter of the order of 50–100 nm are obtained. Let us note that the coercivity of the easy-axis $`Co/Pt`$ film is of the order of 1–2 kOe . However, our considerations hold only in the static limit, i.e. for very slow monopoles. As we have noted before, the characteristic velocity of a monopole is $`v\sim 10^{-4}`$–$`10^{-3}c`$, and for our consideration let us assume $`v=10^{-4}c`$. The time of the monopole interaction with an electron, $`\tau `$, can be defined as the time during which a field higher than some critical field $`H_c`$ acts on the electron: $$\tau \sim \frac{r_c}{v}\sim \frac{1}{v}\sqrt{\frac{\mathrm{\Phi }_0}{4\pi H_c}}.$$ (4) At $`H_c`$ of the order of $`3\times 10^3`$ Oe we have $`\tau \approx 3\times 10^{-12}\,s`$. It means that the spin flip of the magnetic moments in the “track” takes place during the interaction time. For such a spin flip, the adiabatic condition is necessary — the frequency of the spin precession in the overturning field should be much larger than the inverse time of the interaction: $$\omega =\frac{\mu _BH}{\hbar }\gg \frac{1}{\tau }.$$ (5) It is possible to derive from here the minimal magnetic field appropriate for the adiabatic mode, and the track radius: $$H\gg H_c=\frac{\hbar }{\mu _B\tau }=\frac{4\pi \hbar ^2v^2}{\mu _B^2\mathrm{\Phi }_0},$$ (6) $$R_t\sim r_c=\sqrt{\frac{\mathrm{\Phi }_0}{4\pi H_c}}.$$ (7) In our case, at $`v\approx 10^{-4}c`$ we get $$H_c\approx 10^7\,Oe;\;r_c\approx 10^{-7}\,cm,$$ and for $`v\approx 10^{-6}c`$ we have $$H_c\approx 10^3\,Oe;\;r_c\approx 10^{-5}\,cm.$$ It is obvious that the conditions of adiabatic, and even resonant, spin flip are not fulfilled while $`r_c<r_0`$, which corresponds to monopole speeds $`v\gtrsim 10^{-6}c`$. We shall now consider the influence of the conductivity of the film material. The reason is that the monopole magnetic flux is frozen into the cylindrical area around the track axis and then spreads radially. The radius of the flux pipe and the diffusion rate of the flux are determined by the film conductivity.
The radius of the flux pipe $`\delta _c`$ can be found as: $$\delta _c=\frac{c^2}{2\pi \sigma \mu v}.$$ (8) This length has a simple physical meaning. At distances less than $`\delta _c`$ the monopole field can be considered as free. At distances of the order of $`\delta _c`$ and more, the magnetic tail of the monopole is formed, which is an analogue of a string in a superconductor. Due to the finite conductivity of the material, the tail spreads gradually, and the energy of the magnetic field converts into heat. At a monopole velocity of about $`v\approx 10^{-4}c`$, the flux pipe has a radius of the order of $`10^{-5}\,cm`$. The flux pipe spreads in time as: $$R(t)=\delta _c\sqrt{\frac{t}{\tau }}.$$ (9) Thus the magnetic moment of the track is conserved, as well as the frozen flux. It is easy to calculate the average intensity of the magnetic field in the flux pipe immediately after the monopole flight: $$H=\frac{\mathrm{\Phi }_0}{\pi \delta _c^2}\approx 10^3\,\mathrm{Oe}.$$ (10) The typical time of the monopole interaction with an electron in a conductor is: $$\tau \sim \frac{\delta _c}{v}\approx 10^{-11}\text{–}10^{-10}\,s,$$ (11) and the field strength providing the adiabatic inversion of the magnetic spins in a track will be: $$H_c\sim \frac{\hbar }{\mu _B\tau }\approx 10^4\,\mathrm{Oe}.$$ (12) Thus, the frozen field $`H_c`$ in the conductor can appreciably affect the process of spin flip in the track and provide the adiabatic spin flip of electrons in the magnet at monopole speeds below $`10^{-4}c`$. ## 3 Detection of the monopole track. As was shown above, the domain induced by the moving monopole has a typical size of the order of 50 nm and a magnetization of several thousand Gauss. Then the domain magnetic flux $`\mathrm{\Phi }_d`$ will be of the order of: $`\mathrm{\Phi }_d=\pi r^2I_s\approx \mathrm{\Phi }_0=2\times 10^{-7}\,\mathrm{G\,cm}^2`$. For the detection of such a flux we can use a highly sensitive fluxmeter based on a superconducting quantum interference device (SQUID), or a magneto-optical device based on the Kerr effect (the rotation of the polarization plane of light reflected by the surface of a ferromagnet magnetized perpendicular to the surface). It is clear that the second way is technically easier and does not require cryogenic maintenance. In the latter case, the realization of such a detector requires a surface covered with a thin layer of easy-axis magnetized magnetic medium, plus a magneto-optic device to detect the spots of transverse magnetization of the film (to detect the domains of opposite direction of magnetization!) with a system of precise positioning. A similar technique has emerged recently in an almost ready shape, suitable for the detector design with minimal adjustment. It is the magneto-optic recording technology used in modern magneto-optic disks (MO disks) and their readout devices. There already are MO disks with multilayer coatings of $`Sm/Co`$ and $`Pt/Co`$. The coercivity of multilayer $`Pt/Co`$ coatings is about $`1\,kOe`$ at 10 layers of total thickness about $`15\,nm`$ . The size of a magnetic bubble which can be detected in such a coating by the magneto-optical method is about $`60\,nm`$. This technique, using near-field optics, has been developed, for instance, at Bell Laboratories . The coercivity of coatings with $`Sm/Co`$ is in the interval $`3`$–$`5\,kOe`$, and the reference size of a magnetic bubble is $`50\,nm`$ . For the detection of the magnetic track of the monopole it is possible to use slightly modified standard MO drives.
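Before turning to the detector layout, a short order-of-magnitude check of the track flux quoted above may be useful. The bubble radius and the induction used below are the ballpark values from the text (half of the ~50 nm bubble size, and an induction of order $`10^4`$ G corresponding to a magnetization of several thousand Gauss); Gaussian units are assumed:

```python
import math

# Order-of-magnitude sketch of the track flux (Gaussian units; the radius
# and induction below are illustrative values taken from the text).
r = 25e-7             # bubble radius ~25 nm, expressed in cm
B = 1e4               # induction inside the domain, ~10^4 G
phi_d = math.pi * r * r * B       # flux through the bubble
phi_0 = 2.07e-7                   # flux quantum hc/2e in G cm^2
print(f"Phi_d = {phi_d:.1e} G cm^2, Phi_d/Phi_0 = {phi_d / phi_0:.1f}")
# ~2e-7 G cm^2, i.e. about one flux quantum, as quoted above
```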
Having covered a large enough surface with such MO disks, we can obtain an effective and relatively inexpensive detector of slow moving cosmic monopoles, which we can expose for an unlimited time. However, it is more effective to use the MO detector to search for relict monopoles entrapped in ferromagnetic inclusions of iron ore. Naturally, the melted iron ore becomes paramagnetic and the ferromagnetic traps disappear. The monopole is then likely to be surrounded by a cluster of several dozen iron atoms. The size of such a complex is determined by thermodynamic equilibrium: $$\frac{\mu _{Fe}g}{r_{Fe}^2}=\frac{3}{2}kT$$ (13) The radius of the iron atomic complex paramagnetically bound to the monopole at $`T\approx 1200^0C`$ is: $$r_{Fe}\approx 6\times 10^{-8}\,cm.$$ (14) Such a complex contains about 30 iron atoms. The motion of such a small blob is determined by Stokes' law: $$F_v=6\pi r_{Fe}\eta v,$$ (15) where $`v`$ is the monopole velocity and $`\eta `$ is the dynamic viscosity of liquid foundry iron, $`\eta =2\times 10^{-3}\,kg/(m\,s)`$ at $`T=1250^0C`$. Equating the force of friction to the gravity force $`F_g=mg`$, we find the velocity $`v`$ of the monopole falling through the melt, $$v=\frac{mg}{6\pi r_{Fe}\eta },$$ (16) which gives $`v\approx 3\times 10^{-1}\,m/s`$ for a monopole with a mass of about $`10^{15}\,GeV`$. This corresponds to a kinetic energy of the complex of about $`10^6\,eV`$, which is large enough for stripping the complex off at the solid bottom of the furnace. Let us remark that from a formal point of view such an approach is quite acceptable, as the Reynolds number in our case is small enough: $`Re<10^{-3}`$. This grain (complex) should sink in the liquid iron at a 10–100 cm/s velocity until it reaches the bottom of the blast furnace. Then the iron atoms are stripped off the monopole in the material of the furnace bottom, and the monopole falls further, accelerating up to the velocity of sound in the matter. We should keep in mind that the energy loss of a monopole due to the Cherenkov radiation of phonons and magnons in a medium reaches $`10^8\,eV/cm`$ and even more, as was shown earlier , . This is the main point to understand: gravitational forces cannot accelerate the monopole to a velocity higher than the speed of sound in a medium. Usually a blast furnace melts about 10 000 tons of ore per day, and it is possible without great problems to expose the MO detector, for example, for one year. Thus we hope that such an MO detector can significantly improve the experimental limit on the density of relict monopoles entrapped in iron ore, which today is equal to $`\rho _m<2\times 10^{-7}/g`$ . Furthermore, in a sinter machine the ore is also heated above the Curie temperature, but not up to the melting point. So, to shake off the iron atoms, we have to kick the pieces of iron ore with an acceleration of $`10`$–$`100\,g`$. Clearly, in this last case the probability of monopole release is considerably lower. ## 4 Conclusion. The interaction of monopoles with films of magnetic materials is considered. In particular, the interaction of slow monopoles with thin films of easy-axis magnets with high and low mobility of domain walls (materials with magnetic bubbles) is discussed. It is shown that during the passage of a slow monopole through a magnetically hard film, a track-domain can be formed with a typical size of about 50 nm and with a magnetization of several thousand Oe. Thus the magnetic flux of the track appears to be about the value of the flux quantum.
For the detection of such a flux, detectors using a fluxmeter based on the already widely known SQUID can be used. It appears, however, that for the registration of traces of slow cosmic monopoles in magnetic matter, experimental devices using the Kerr magneto-optical effect are more appropriate. They have emerged recently in a shape suitable for detector design, with an appropriate adjustment. It should be realized that such passive detectors will be especially effective in the search for relict monopoles entrapped in ferromagnetic inclusions of iron ore. These monopoles should be extracted from the ore during the melting process. Then these slow moving monopoles can be detected by a passive MO detector. We can expose MO disks in a cavity under a blast furnace, exactly under the bath with molten metal, where the temperature does not exceed $`+50^0C`$. In the melting process the temperature of the ore exceeds the Curie point and its ferromagnetic properties disappear. Hence the ferromagnetic traps which hold the monopoles are ”switched off”, and the released monopoles fall through the molten metal to the bottom of the bath and finally through the MO disks. When a monopole, pulled downward by gravity, crosses the surface of one of the MO disks, it leaves a magnetic track in its coating. Slow moving relict monopoles can also be released by a sinter machine, and usually both installations process no less than 10 000 tons of ore per day. The authors are grateful to all colleagues for helpful and lively discussions of this work in various places and institutes. Special thanks go, for stimulating interest and useful remarks, to L.M. Barkov, I.B. Khriplovich (all from Budker INP) and V.A. Gordeev (PNPI).
no-problem/9909/hep-th9909195.html
ar5iv
text
# 1 Introduction Branched polymers provide us with one of the simplest generalizations of the random walk. The theory of branched polymers is well known (see e.g. and references therein). In a recent paper it is claimed that the standard treatment of the theory of branched polymers, as presented for instance in , is not correct. In this note we show that this claim is false. The problem under discussion concerns the propagation of branched polymers in target space. This is analogous to the propagation of a particle in space-time or the propagation of a bosonic string in 26-dimensional space-time. The grand canonical ensemble provides a convenient framework for developing the theory, and this is how it is done in and in many of the papers referred to therein. This approach is claimed to be erroneous in , where the discussion is based on the canonical ensemble. As we explain below, the basic requirement of the authors of on the relation between the two approaches, namely that the correlation functions in the grand canonical ensemble at the critical value of the fugacity should reproduce those in the canonical ensemble for large N: $$\underset{N\to \mathrm{}}{lim}\frac{G_N}{Z_N}=\underset{\mu \to \mu _c}{lim}\frac{G_\mu }{Z_\mu },$$ (1) is not relevant, and in fact incorrect. In addition, we explain that the three main conclusions of , i.e. (i) that the 2-point function $`G_\mu (p)`$ differentiated w.r.t. the “chemical” potential $`\mu `$ behaves like $`1/p^4`$ for large $`p`$, (ii) that the 3-point function behaves like $$G_\mu ^3(p,q)\sim G_\mu (p)G_\mu (q)G_\mu (p+q),$$ (2) and (iii) that the 3-point function differentiated w.r.t. $`\mu `$ is given by $$\frac{dG_\mu ^3(p,q)}{d\mu }\sim G_\mu ^{\prime }(p)G_\mu (q)G_\mu (p+q)+G_\mu (p)G_\mu ^{\prime }(q)G_\mu (p+q)+G_\mu (p)G_\mu (q)G_\mu ^{\prime }(p+q),$$ (3) (Eqs. (47-48) in ), are all obviously correct, and in fact trivial consequences of what the authors call the “naive” approach. However, their claim that only the differentiated 3-point function is consistent with a correct “thermodynamical” limit is misleading. Eq. (2) is the correct 3-point function, defined in the same way as one would define a 3-point function in string theory, and it does not contain any non-universal part. Eq. (3) is not the 3-point function but (by definition) the derivative of the 3-point function with respect to the chemical potential. Finally, the authors of give some comments concerning the nature of “baby universes” in the theory of branched polymers which are supposed to support the claims made. We point out that these speculations are mistaken. ## 2 Explanation Let us introduce some notation. The random walk representation of the Euclidean propagator is given by $$G_\mu (x_1,x_2)=\sum _{N=1}^{\mathrm{}}e^{-\mu N}\int \prod _{i=1}^{N}dy_i\prod _{i=0}^{N}f(y_{i+1}-y_i),\qquad y_{N+1}=x_2,\;y_0=x_1,$$ (4) where $`f(x)`$ is a suitable weight function which should fall off sufficiently fast. We call (4) the grand canonical partition function for the random walk, $`\mu `$ the chemical potential and $`e^{-\mu }`$ the fugacity. There is a critical value, $`\mu _c`$, of $`\mu `$ above which the sum in (4) is convergent and below which it is divergent. We can write $$G_\mu (x_1,x_2)=\sum _Ne^{-\mathrm{\Delta }\mu N}G_N(x_1,x_2),\qquad \mathrm{\Delta }\mu =\mu -\mu _c,$$ (5) and $`G_N(x_1,x_2)`$ is called the canonical partition function for the random walk, since the “internal” volume (number of steps) $`N`$ is fixed.
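As a minimal numerical illustration of the grand canonical sum (4), assume a Gaussian one-step weight $`f`$ with unit variance, normalized so that $`\mu _c=0`$. The sum over $`N`$ is then geometric, and near the critical point it reproduces the propagator form quoted in eq. (6) below. All parameter values are illustrative:

```python
import numpy as np

# Gaussian one-step weight: fhat(p) = exp(-p**2/2), normalized so mu_c = 0.
# Then G_mu(p) = sum_{N>=1} e^{-mu N} fhat(p)^{N+1} is a geometric series,
# and near criticality G_mu(p) ~ 1/(dmu + p**2/2).
dmu = 1e-3                        # Delta mu = mu - mu_c, small
p = np.linspace(0.0, 0.2, 5)
fhat = np.exp(-0.5 * p**2)
w = fhat * np.exp(-dmu)           # weight picked up per step
G = fhat * w / (1.0 - w)          # closed form of the geometric sum
approx = 1.0 / (dmu + 0.5 * p**2) # scaling form near the critical point
print(G / approx)                 # ratios ~1 for small p and small dmu
```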
By Fourier transformation we define $`G_\mu (p)`$ and $`G_N(p)`$. Close to the critical point $`\mu _c`$ we have $$G_\mu (p)\sim \frac{1}{\mathrm{\Delta }\mu +p^2},\qquad G_N(p)\sim e^{-p^2N},$$ (6) such that $$G_\mu (p)\sim \int dN\,e^{-\mathrm{\Delta }\mu N}G_N(p).$$ (7) It is clear that the relation (1) is not satisfied for random walks, and there is no reason why it should be satisfied, since the left-hand side is the heat kernel and the right-hand side is the propagator. The value of the Hausdorff dimension of the random walk follows from (7). Without going into a detailed derivation (which can e.g. be found in ), we simply note that $`N`$ and $`\mathrm{\Delta }\mu `$, as well as $`p`$ and $`x`$, are conjugate variables. In the scaling limit this leads to $$\langle N\rangle _\mu \sim \frac{1}{\mathrm{\Delta }\mu },\qquad \langle |x|\rangle \sim \frac{1}{(\mathrm{\Delta }\mu )^{\frac{1}{2}}},\quad \mathrm{i.e.}\quad \langle N\rangle _\mu \sim x^2,$$ (8) where $`\langle \mathrm{}\rangle _\mu `$ is the expectation in an ensemble of walks whose endpoints are separated by a distance $`x`$ in embedding space. In the case of branched polymers one obtains similarly, for $`\mu `$ close to $`\mu _c`$: $$G_\mu (p)\sim \frac{1}{\sqrt{\mathrm{\Delta }\mu }+p^2}.$$ (9) This result is universal (as for the random walk) and contains no non-scaling part. Standard arguments , identical to the ones leading to Eqs. (8) for the random walk, imply (in the scaling limit): $$\langle N\rangle _\mu \sim \frac{1}{\mathrm{\Delta }\mu },\qquad \langle |x|\rangle \sim \frac{1}{(\mathrm{\Delta }\mu )^{\frac{1}{4}}},\quad \mathrm{i.e.}\quad \langle N\rangle _\mu \sim x^4,$$ (10) which shows that the Hausdorff dimension of branched polymers is four. As for the random walk, the 2-point function $`G_\mu `$ for branched polymers does not satisfy (1), and for the same reason as before this fact does not disqualify it as the correct propagator. Of course the large-$`p`$ behaviour of derivatives of $`G_\mu (p)`$ w.r.t. $`\mu `$ is $$\frac{d^lG_\mu (p)}{d\mu ^l}\sim \frac{1}{(\mathrm{\Delta }\mu )^{l-\frac{1}{2}}\,p^4}+O(p^{-6}),$$ (11) for $`l>0`$. The fact that this large-$`p`$ behavior of (11) agrees with the large-$`p`$ behavior of $`G_N(p)`$ derived from (9) does not make it the correct propagator for branched polymers. In fact, it is simply the propagator for branched polymers with $`l`$ marked vertices, since each differentiation brings down a factor of $`N`$, which is the number of ways one can choose a vertex to mark. Next let us comment on Eqs. (2) and (3). The derivation of (2), as presented in , is acknowledged in , but the authors object that it does not satisfy (1). We repeat that this is not a relevant objection, as we have explained, and contrary to their statements there are no non-universal terms contributing to this equation. Eq. (3) is an immediate consequence of eq. (2), obtained by differentiating both sides w.r.t. $`\mu `$. Indeed it, and more generally eq. (52) of , can also be obtained before taking the scaling limit by writing the factor $`N`$ coming from the differentiation as a sum of the numbers of vertices associated with each propagator in the appropriate $`\varphi ^3`$-graph, taking care of the endpoint contributions of the propagators. In particular, (3) is by definition not the 3-point function for branched polymers, but the 3-point function for branched polymers with one additional marked point. Let us finally comment on the remarks made in about baby universes. It is claimed that one has a situation similar to two-dimensional gravity, where a typical surface consists of a “parent” universe dressed with small baby universes that are connected to the parent by a bottleneck.
The argument is based on a relation for the 1-point function<sup>1</sup><sup>1</sup>1For some reason Eq. (55) in misses a combinatorial factor $`N^{\prime }`$, leading to erroneous conclusions.: $$G_N^1>\int dN^{\prime }\,N^{\prime }\,G_{N^{\prime }}\,G_{N-N^{\prime }}^1.$$ (12) However, this relation is only (approximately) valid when the entropy exponent $`\gamma <0`$. In fact, it is usually used to “derive” that $`\gamma <0`$ in 2d gravity. For branched polymers $`\gamma =1/2`$ and (12) is not valid (inserting $`G_N^1\sim N^{\gamma -2}=N^{-3/2}`$ clearly shows that (12) is violated for branched polymers (and for all $`\gamma >0`$)). The reason is that if there are numerous “baby universes” of all sizes, then (12) cannot hold, because the decomposition of a graph with one marked point into a graph with two marked points of size $`N^{\prime }`$ and a graph with one marked point of size $`N-N^{\prime }`$ is not unique. A more refined treatment leads to the conclusion that $`\gamma >0`$ implies $`\gamma =1/2`$ under very general conditions. The scenario with baby universes of all sizes when $`\gamma >0`$ has been verified in numerous computer simulations (using, by the way, the canonical ensemble!). ## Acknowledgement J.A. and B.D. thank MaPhySTo – Centre for Mathematical Physics and Stochastics, funded by a grant from The Danish National Research Foundation – for support.
no-problem/9909/hep-ph9909309.html
ar5iv
text
# Chaos analyses in both phases of QED and QCD ## 1 Motivation The study of the chaotic dynamics of classical field configurations in field theory finds its motivation in phenomenological applications as well as in the understanding of basic principles. The role of chaotic field dynamics for the confinement of quarks is a longstanding question. Here, we analyze the leading Lyapunov exponents of compact U(1) and of SU(2) Yang-Mills field configurations on the lattice. The real-time evolution of the classical field equations was initialized from Euclidean equilibrium configurations created by quantum Monte Carlo simulations. In this way we expect to see a connection between the strong coupling phase and the strength of chaotic behavior in lattice simulations. After reviewing essential definitions of the physical quantities describing chaos and their computation in lattice gauge theory, we outline our method for the extraction of starting configurations for a three-dimensional Hamiltonian dynamics from four-dimensional Euclidean field configurations. Our results are then presented by showing an example of the exponential divergence of small initial distances between nearby field configurations, followed by a detailed study of the maximal Lyapunov exponent and the average plaquette energy as functions of the coupling strength. ## 2 Classical chaotic dynamics Chaotic dynamics in general is characterized by the spectrum of Lyapunov exponents. These exponents, if they are positive, reflect an exponential divergence of initially adjacent configurations. In case of symmetries inherent in the Hamiltonian of the system there are corresponding zero values of these exponents. Finally, negative exponents belong to irrelevant directions in phase space: perturbation components in these directions die out exponentially. Pure gauge fields on the lattice show a characteristic Lyapunov spectrum consisting of one third of each kind of exponents . This fact reflects the elimination of the longitudinal degrees of freedom of the gauge bosons. Assuming this general structure of the Lyapunov spectrum, we presently investigate its magnitude only, namely the maximal value of the Lyapunov exponent, $`L_{\mathrm{max}}`$. The general definition of the Lyapunov exponent is based on a distance measure $`d(t)`$ in phase space, $$L:=\underset{t\to \mathrm{}}{lim}\underset{d(0)\to 0}{lim}\frac{1}{t}\mathrm{ln}\frac{d(t)}{d(0)}.$$ (1) In the case of conservative dynamics the sum of all Lyapunov exponents is zero according to Liouville's theorem, $$\sum _iL_i=0.$$ (2) We utilize the gauge invariant distance measure consisting of the local differences of the energy densities of two field configurations on the lattice: $$d:=\frac{1}{N_P}\sum _P\left|\mathrm{tr}U_P-\mathrm{tr}U_P^{\prime }\right|.$$ (3) Here the symbol $`\sum _P`$ stands for the sum over all $`N_P`$ plaquettes, so this distance is bounded in the interval $`(0,2N)`$ for the group SU(N). $`U_P`$ and $`U_P^{\prime }`$ are the familiar plaquette variables, constructed from the basic link variables $`U_{x,i}`$, $$U_{x,i}=\mathrm{exp}\left(aA_{x,i}^cT^c\right),$$ (4) located on lattice links pointing from the position $`x=(x_1,x_2,x_3)`$ to $`x+ae_i`$. The generators of the group are $`T^c=ig\tau ^c/2`$, with $`\tau ^c`$ being the Pauli matrices in the case of SU(2), and $`A_{x,i}^c`$ is the vector potential.
The elementary plaquette variable is constructed for a plaquette with a corner at $`x`$ and lying in the $`ij`$-plane as $$U_{x,ij}=U_{x,i}U_{x+i,j}U_{x+j,i}^{\dagger }U_{x,j}^{\dagger }.$$ (5) It is related to the magnetic field strength $`B_{x,k}^c`$: $$U_{x,ij}=\mathrm{exp}\left(\epsilon _{ijk}aB_{x,k}^cT^c\right).$$ (6) The electric field strength $`E_{x,i}^c`$ is related to the canonically conjugate momentum $`P_{x,i}=\dot{U}_{x,i}`$ via $$E_{x,i}^c=\frac{2a}{g^3}\mathrm{tr}\left(T^c\dot{U}_{x,i}U_{x,i}^{\dagger }\right).$$ (7) ## 3 Initial states prepared by quantum Monte Carlo The Hamiltonian of the lattice gauge field system can be cast into the form $$H=\sum \left[\frac{1}{2}\langle P,P\rangle +1-\frac{1}{4}\langle U,V\rangle \right].$$ (8) Here the scalar product between group elements stands for $`\langle A,B\rangle =\frac{1}{2}\mathrm{tr}(AB^{\dagger })`$. The staple variable $`V`$ is a sum of triple products of elementary link variables closing a plaquette with the chosen link $`U`$. This way the Hamiltonian is formally written as a sum over link contributions, and $`V`$ plays the role of the classical force acting on the link variable $`U`$. The naive equations of motion following from this Hamiltonian, however, have to be completed in order to fulfill the constraints $`\langle U,U\rangle `$ $`=`$ $`1,`$ $`\langle P,U\rangle `$ $`=`$ $`0.`$ (9) The following finite time step recursion formula: $`U^{\prime }`$ $`=`$ $`U+dt\,(P^{\prime }-\epsilon U),`$ $`P^{\prime }`$ $`=`$ $`P+dt\,(V-\mu U+\epsilon P^{\prime }),`$ (10) with the Lagrange multipliers $`\epsilon `$ $`=`$ $`\langle U,P^{\prime }\rangle ,`$ $`\mu `$ $`=`$ $`\langle U,V\rangle +\langle P^{\prime },P^{\prime }\rangle ,`$ (11) conserves the Noether charge belonging to the Gauss law, $$\mathrm{\Gamma }=\sum _{+}PU^{\dagger }-\sum _{-}U^{\dagger }P.$$ (12) Here the sums indicated by $`+`$ run over links starting from, and those by $`-`$ ending at, a given site $`x`$, where the Noether charge $`\mathrm{\Gamma }`$ is defined. The above algorithm is written in an implicit form, but it can be cast into explicit steps, so no iteration is necessary . Initial conditions chosen randomly with a given average magnetic energy per plaquette have been investigated in past years. In the SU(2) case, a linear scaling of the maximal Lyapunov exponent with the total energy of the system has been established for different lattice sizes and coupling strengths . In the present study we prepare the initial field configurations from a standard four-dimensional Euclidean Monte Carlo program on a $`12^3\times 4`$ lattice, varying the inverse gauge coupling $`\beta `$ . We relate such four-dimensional Euclidean lattice field configurations to Minkowskian momenta and fields for the three-dimensional Hamiltonian simulation by the following approach: First we fix a time slice of the four-dimensional lattice. We denote the link variables in the three-dimensional sub-lattice by $`U^{\prime }=U_i(x,t).`$ Then we build triple products on attached handles in the positive time direction, $`U^{\prime \prime }=U_4(x,t)U_i(x,t+a)U_4^{\dagger }(x+a,t)`$. We obtain the canonical variables of the Hamiltonian system by using $`P`$ $`=`$ $`(U^{\prime \prime }-U^{\prime })/dt,`$ $`U`$ $`\propto `$ $`(U^{\prime \prime }+U^{\prime }).`$ (13) Finally $`U`$ is normalized to $`\langle U,U\rangle =1`$. This definition constructs the momenta according to a simple definition of the timelike covariant derivative. The multiplication with the link variables in the time direction can also be viewed as a gauge transformation to $`U_4(x,t)=1`$, i.e. the $`A_0=0`$ Hamiltonian gauge.
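A minimal sketch of the construction (13) may be helpful. Representing SU(2) links as unit quaternions, the scalar product $`\langle A,B\rangle =\frac{1}{2}\mathrm{tr}(AB^{\dagger })`$ becomes the ordinary 4-vector dot product, and one can check that the constraints (9) hold automatically for the constructed pair. The random stand-ins for $`U^{\prime }`$ and $`U^{\prime \prime }`$ below are illustrative assumptions, not equilibrated configurations:

```python
import numpy as np

# SU(2) links as unit quaternions U = u0 + i u.sigma, so that
# <A,B> = (1/2) tr(A B^dagger) is the 4-vector dot product.
rng = np.random.default_rng(0)

def rand_su2():
    u = rng.normal(size=4)
    return u / np.linalg.norm(u)

dt = 0.01
Up, Upp = rand_su2(), rand_su2()   # stand-ins for U' and U''
P = (Upp - Up) / dt                # momentum, first line of eq. (13)
U = Upp + Up
U = U / np.linalg.norm(U)          # normalization enforcing <U,U> = 1

print(np.dot(U, U))                # 1: first constraint of eq. (9)
print(np.dot(P, U))                # 0 up to rounding: <P,U> = 0 follows
                                   # automatically because |U'| = |U''| = 1
```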
## 4 Chaos and confinement We start the presentation of our results with a characteristic example of the time evolution of the distance between initially adjacent configurations. An initial state prepared by a standard four-dimensional Monte Carlo simulation is evolved according to the classical Hamiltonian dynamics in real time. Afterwards this initial state is rotated locally by group elements which are chosen randomly near the unity. The time evolution of this slightly rotated configuration is then pursued, and finally the distance between the two evolutions is calculated at corresponding times. A typical exponential rise of this distance followed by a saturation can be inspected in Fig. 1 for an example of U(1) gauge theory in the confinement phase and in the Coulomb phase. While the saturation is an artifact of the compact distance measure on the lattice, the exponential rise (the linear rise of the logarithm) can be used for the determination of the leading Lyapunov exponent. The naive determination and more sophisticated rescaling methods lead to the same result. The main result of the present study is the dependence of the leading Lyapunov exponent $`L_{\mathrm{max}}`$ on the inverse coupling strength $`\beta `$, displayed in Fig. 2 for an ensemble of 100 independent U(1) configurations. As expected, the strong coupling phase, where confinement of static sources was established many years ago by proving the area-law behavior of large Wilson loops, is more chaotic. The transition reflects the critical coupling to the Coulomb phase. Furthermore, the maximal Lyapunov exponent scatters more strongly than the average energy per plaquette. Fig. 3 shows the somewhat smoother transition of the energy per plaquette as a function of the inverse coupling strength. Fig. 4 depicts the correlation of the Lyapunov exponents and the plaquette energies for 100 U(1) configurations. The blank area is indicative of the transition point being presumably of first order. We now turn to a comparison of the expectation values in the U(1) and SU(2) theories. Fig. 5 exhibits the averaged leading Lyapunov exponent between the strong and the weak coupling regime. The smoother fall-off of the SU(2) Lyapunov exponent reflects the second order of the finite temperature transition to a Debye-screened phase of free quarks. Fig. 6 compares the averaged plaquette energies of both gauge theories, signaling the different orders of their phase transitions. Fig. 7 shows the energy dependence of the Lyapunov exponents for both theories. One observes an approximately linear relation for the SU(2) case, while a quadratic relation is suggested for the U(1) theory in the weak coupling regime. From scaling arguments one expects a functional relationship between the Lyapunov exponent and the energy, $$L(a)\sim a^{k-1}E^k(a),$$ (14) with the exponent $`k`$ being crucial for the continuum limit of the classical field theory. A value of $`k<1`$ leads to a divergent Lyapunov exponent, while $`k>1`$ yields a vanishing $`L`$ in the continuum. The case $`k=1`$ is special, leading to a finite non-zero Lyapunov exponent. Our analysis of the scaling relation (14) gives evidence that the classical compact U(1) lattice gauge theory has $`k\approx 2`$ and, with $`L(a)\to 0`$, a regular continuum theory. The non-Abelian SU(2) lattice gauge theory signals $`k\approx 1`$ and stays chaotic approaching the continuum. Summarizing, we investigated the classical chaotic dynamics of U(1) and SU(2) lattice gauge field configurations prepared by quantum Monte Carlo simulation.
The maximal Lyapunov exponent shows a pronounced transition as a function of the coupling strength. Both for QED and QCD we find that configurations in the strong coupling phase are substantially more chaotic than in the weak coupling regime. Our results demonstrate that chaos is present when particles are confined, but it persists partly also into the Coulomb and quark-gluon-plasma phases. So much for the gauge fields; an independent analysis of the fermion fields yields compatible results with respect to quantum chaos . ## Acknowledgments This work has been supported by the Hungarian National Scientific Fund under the project OTKA T019700 as well as by the Joint American Hungarian Scientific Fund TeT MAKA 649.
no-problem/9909/astro-ph9909482.html
ar5iv
text
# Hypercat : A Database for extragalactic astronomy ## 1. Present content of Hypercat Hypercat maintains catalogues of data collected in the literature or at the telescope, concerning the photometry, kinematics and spectrophotometry of galaxies. Some catalogues contain “global” properties, such as total magnitudes, while others contain spatially resolved data. They give basic data to study the scaling relations of galaxies, for instance the Fundamental Plane, and contain all the information needed to make the necessary corrections and normalizations in order to compare measurements of galaxies at different redshifts. The catalogues of global properties are: * *The catalogue of central velocity dispersions* (for galaxies and globular clusters) has been presented in a preliminary form in Prugniel & Simien 1996. The present version gives 5470 measurements published in 352 references for 2335 objects. Hypercat allows one to retrieve the published measurements as well as homogenized (i.e. corrected for systematic effects between datasets) and aperture-corrected data. * *The catalogue of magnitudes and colours* (published in Prugniel & Heraudeau, 1998) presents the photometry of 7463 galaxies in the U to I bands. The global parameters, asymptotic magnitude, surface brightness, photometric type (i.e. shape of the growth curve), colour and colour gradients were computed from circular aperture photometry. * *The catalogue of Mg2 index* (published in Golev & Prugniel, 1998) has 3712 measurements for 1416 galaxies. Aperture corrections and homogenization are available. * *The maximum velocity of rotation* is available for the stellar rotation of 720 galaxies (mostly early-type). This represents 1491 measurements taken from 224 datasets. A bibliographical catalogue of spatially resolved kinematics (Prugniel et al. 1998) indexes 6214 measurements for 2677 galaxies. In addition, other parameters, like the recession velocity, galactic absorption or environment parameters, are automatically extracted from other databases, and Hypercat provides procedures to compute derived parameters. However, the present understanding of the scaling relations is becoming limited by the quality of a parameterization restricted to these “global” values. For instance, in Prugniel et al. (1996) we have shown that a more detailed description, including rotation and the non-homology of the structure, must be taken into account when studying the fundamental plane of early-type galaxies. For this reason, Hypercat has also embarked on the gathering and distribution of spatially resolved data such as *Multi-aperture photometry* for 20537 galaxies (222045 measurements), *Kinematic profiles* (i.e. “rotation curves”, velocity dispersion profiles…) for 1761 galaxies (73520 measurements) and a *Catalogue of line strength profiles* (currently under development). An original aspect of the development of Hypercat is that the different catalogues are *maintained separately at different sites*. The database is automatically updated by procedures running over the network at times of low traffic. At present, the observatories participating in the project are: Capodimonte (Napoli), Sternberg (Moscow), Brera (Milano), University of Sofia and Lyon. ## 2. Current axes of development: The FITS archive and Data Mining The distribution, over several astronomers, of the work needed to maintain this database makes the individual load affordable, and we can foresee that we will be able to continue this part of the project.
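Purely as an illustration of this distributed-maintenance scheme, the sketch below shows the kind of low-traffic mirroring job one could run. The host names, paths and the use of rsync are our assumptions, not the actual Hypercat implementation:

```python
import subprocess
from datetime import datetime

# Hypothetical nightly job pulling remotely maintained catalogues into a
# central database copy (illustrative hosts and paths only).
SITES = {
    "velocity_dispersions": "hypercat@site-a.example.org:catalogues/vdisp/",
    "mg2_index":            "hypercat@site-b.example.org:catalogues/mg2/",
}

def nightly_update(local_root="/data/hypercat/"):
    # Mirror each remote catalogue; run from cron at times of low traffic.
    for name, remote in SITES.items():
        subprocess.run(["rsync", "-az", remote, local_root + name + "/"],
                       check=True)
        print(datetime.now().isoformat(), "updated", name)

if __name__ == "__main__":
    nightly_update()
```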
In addition, as Hypercat becomes known in the community, people begin to send us their data in a form that makes them easy to integrate. The usual approach when new measurements are needed is to make new observations. This is justified when the past observations do not have the required quality, but archived observations offer in many cases a serious alternative *if* the data can be accessed easily and are described well enough. We started in 1998 the construction of a FITS archive in Hypercat (HFA), coupled to data-mining procedures aimed at distributing data at any desired stage of processing, or even measurements. At present HFA contains 29366 FITS files (for 14631 galaxies), mainly from our medium-resolution spectra of galaxies (see Golev et al. 1998 for details) and the ESO-LV survey (Lauberts et al. 1989). In the near future we will archive other datasets, and in particular we call for contributions from astronomers outside our group who may be interested in distributing their data through this channel. ## References Golev, V., Prugniel, Ph., 1998, A&AS, 132, 255 Lauberts, A., Valentijn, E. A., 1989, ESO Maubon, G., Prugniel, Ph., Golev, V., Simien, F., 1998, *La lettre de l'OHP*, 18 Prugniel, Ph., Heraudeau, Ph., 1998, A&AS, 128, 299
no-problem/9909/astro-ph9909056.html
ar5iv
text
# Tracing interactions in HCGs through the HI ## 1. Introduction and data HCGs constitute an especially appropriate laboratory for the study of interactions among galaxies. Our aim is to understand how star formation, morphology and dynamics are affected by the environmental conditions, and our approach is multiwavelength (e.g. Yun et al. 1997, Verdes-Montenegro et al. 1998). Since atomic hydrogen is a sensitive tracer of tidal interaction (e.g. Hibbard & Van Gorkom 1996, Hibbard 1999), we have performed VLA mapping of a large number of HCGs which, added to the previously published ones (Williams & Van Gorkom 1988 - WVG88, Williams, McMahon and Van Gorkom 1991 - WMVG91, Williams & Van Gorkom 1995 - WVG95, Williams et al. 1997 - W97), leads to a total sample of 16 groups. In two cases (HCG 31 and HCG 79) we have obtained higher spatial and spectral resolution data which have helped to improve our understanding of these systems. All the HI data herein are analyzed together for the first time and provide a sample to test previously proposed models of compact groups, as well as a detailed database for hydrodynamical simulations. HCG 2, 18 and 54 are considered apart, since the first one is a triplet and the others likely have fewer than 4 galaxies. ## 2. HI content and distribution We have measured the HI mass associated with the main body of the galaxies as well as that found in gaseous tidal features, except for HCG 26 (WVG88) and HCG 49, where the emission is found in a large envelope containing all group members and therefore cannot be separated. The HI content of the spiral and lenticular galaxies has been compared with the one predicted from their optical luminosity (Fig. 1a), based on the relationships obtained by Haynes & Giovanelli (1984) for a sample of isolated galaxies. We note that HCG 33 has only one spiral member, which has a bright star superposed on it, hence an accurate determination of L<sub>B</sub> cannot be obtained. Disks with a normal HI content are found, but most galaxies have significant amounts of missing gas, and this is evident in spite of the intrinsic dispersion in the HI content ($`\sigma \approx 0.25`$ in the log). None of the lenticular galaxies (empty circles) in this sample has been detected, but since they usually show a wide range of HI content, we have checked that their inclusion does not significantly affect our conclusions. The total mass expected for a group is also plotted; it has been calculated as the sum of the expected values for each member. The use of the summed L<sub>B</sub> to predict a total HI mass would produce an artificial systematic deficiency of 0.2 in the log due to the non-linearity of the M(HI) - L<sub>B</sub> relation. We have plotted in Fig. 1b the total detected HI mass, i.e., including gas external to the galaxies. Comparison of both panels tells us whether the missing HI is located in tidal features or has simply disappeared. This, together with the morphology and kinematics of the HI, shows the existence of different distributions, which we describe next. ### 2.1. Most of the HI mass within the galaxies HCG 23, HCG 33 and HCG 88 have more than 90% of the HI mass associated with the disks and show little or no signs of interaction (WVG95, W97). HCG 88 constitutes however a very interesting case, as it might be considered a filament seen in projection along the line of sight. The low velocity dispersion of this quartet ($`\sim `$ 30 km/s) together with its high degree of isolation (De Carvalho et al. 1997) contradict the chain-alignment hypothesis.
Consequently we think that HCG 88 is a good example of a physically dense group in a very early stage of interaction. ### 2.2. Significant HI mass in tidal features HCG 16, 31 and 96 show most of the missing gas in numerous tidal features, indicating that multiple interactions are taking place. In HCG 96, 30% of the HI mass is located in two intense tidal tails plus a bridge. H96b is a bright elliptical with an optically detached core (Verdes-Montenegro et al. 1997) plus an extended envelope brighter in the direction of the tidal tails (Fig. 2). Tidal features were also reported for HCG 16 from D-array data by W97, and our new C-array data resolve the emission, indicating that 40% of the HI is distributed in two intense bridges and two tails. Consequently there is no room for diffuse X-ray emission in this group, a result that is inconsistent with the conclusion reached by Mamon et al. (this conference). The most extreme case is that of HCG 31, where 60% of the gas is located in 4 tidal tails and 1 bridge (WMVG91, Del Olmo et al. 1999). HCG 79, the densest group in the HCG catalog, could be included in this category, since its only spiral member shows a tidal tail smoothly connected in velocity with the galactic disk. Considering the two anemic lenticulars, the group is strongly HI deficient and might be better placed in the next category, little HI coupled with the galaxies. The optical diffuse envelope that contains the whole group (Sulentic & Lorre 1983) suggests that this configuration is the product of a much older perturbation; yet the HI morphology indicates a subsequent and more recent interaction, i.e. the tidal tail, that has not yet perturbed the HI disk of the sole spiral member of the group. The presence of the diffuse optical envelope makes it more plausible that the anemic lenticular members were once spiral galaxies stripped of their gas during these earlier interactions. ### 2.3. Little HI coupled with the galaxies #### HI deficient groups. HCG 40, HCG 44 (WMVG91), HCG 92 (W99) and HCG 95 (Huchtmeier et al. 1999) have an HI deficiency of 70 to 90%, as found by Williams and Rood (1987) and Huchtmeier (1997) from single-dish observations. Our maps show that this deficiency involves all galaxies in each group (Fig. 3a). In the case of HCG 92 the 8 $`\times `$ 10<sup>9</sup> M<sub>⊙</sub> of HI detected are entirely located in several clouds and tidal features (W99), so we were not able to determine from which members the gas is missing. However, the multiplicity of features plus the small amount of detected HI strongly suggest that the missing gas was related to most if not all galaxies in the group. In Fig. 3b we show the most striking mapped case, HCG 40, where only one half of the disk of H40c is detected in HI, plus a small cloud on the eastern side of H40d. #### Single HI cloud. The HI towards HCG 26 (WVG88) and HCG 49 constitutes a single envelope with a velocity gradient decoupled from the individual galaxies. In the case of HCG 49 the cloud is round-shaped, with a velocity gradient of $`\sim `$ 250 km s<sup>-1</sup>, containing 4 well differentiated galaxies with a velocity dispersion of 34 km s<sup>-1</sup> and a total optical diameter of 35 kpc. This cloud constitutes a challenge for dynamical models due to the coexistence of separate optical morphologies and a global common kinematics. ## 3. Discussion HCG galaxies respond to their environment at different levels: 60% have redistributed their atomic gas and 65% have a lower-than-expected content, given the intrinsic dispersion of this quantity.
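To make the deficiency bookkeeping behind these statistics concrete, the sketch below shows the usual definition: the logarithmic difference between the HI mass predicted from L<sub>B</sub> (via a Haynes & Giovanelli 1984 type relation) and the observed one. The coefficients a and b are placeholders, not the published fits:

```python
# Illustrative HI-deficiency bookkeeping (coefficients are hypothetical).
def hi_deficiency(log_LB, log_MHI_obs, a=0.66, b=0.82):
    """DEF = log M(HI)_predicted - log M(HI)_observed.

    log_MHI_pred = a + b * log_LB stands in for an M(HI)-L_B relation of
    the Haynes & Giovanelli (1984) type; a, b here are placeholders.
    """
    log_MHI_pred = a + b * log_LB
    return log_MHI_pred - log_MHI_obs

# A galaxy is usually called deficient when DEF exceeds the intrinsic
# scatter (sigma ~ 0.25 in the log, i.e. roughly a factor of two in mass).
print(hi_deficiency(log_LB=10.0, log_MHI_obs=8.3))   # ~0.56: deficient
```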
When the gas is missing from all galaxies in a group, this implies that all of them are interacting with each other and/or with an intragroup medium. The total number of groups perturbed in one of these two ways amounts to 77%. The two kinds of perturbations do not coexist in all cases: we find HI deficient groups for which most of the HI is found in tails and external clouds (e.g. HCG 92), while the groups embedded in a single HI cloud have a normal HI content. The evolution from mild interactions to the generation of multiple tidal features can be well understood as an extrapolation of interacting pairs. The formation of a single cloud with a coherent kinematics implies a larger degree of evolution, and it might be related to the fact that these groups are mostly composed of dwarf galaxies (around 15 kpc in diameter), which tend to have more extended HI disks, as suggested by Bosma (priv. comm.). HCG 31 is a promising candidate to form a large envelope due to its present HI distribution and kinematics, together with the small size of its galaxies. From our data, we point out two possible causes for the observed HI deficiency. One is the presence of a first-ranked elliptical, which could be the case for HCG 40 and 95, since it constitutes a deep potential well that can accrete gas (Vílchez & Iglesias-Páramo 1998). The second one can be the presence of hot gas, as in HCG 92, where the tidal features contain all the detected gas, which anticorrelates with a ridge of X-ray emission (Pietsch et al. 1997). The available X-ray data for our HI sample are mostly upper limits; therefore analyses of the X-ray correlations are limited. We have found a correlation between HI deficiency and the groups' velocity dispersion. The velocity dispersion of a different but larger sample of HCGs correlates with X-ray emission (Ponman et al. 1996), in the sense that the deficiency increases with the X-ray emission. Gas accretion by giant ellipticals or a hot medium may thus compete in the production of HI deficient groups. These factors will be studied further in a subsequent paper. ##### Acknowledgments. LVM acknowledges interesting discussions with J. Sulentic, R. Sancisi, E. Athanassoula and A. Bosma. LV-M, AO and JP are partially supported by DGICYT (Spain) Grant PB96-0921 and Junta de Andalucía (Spain). ## References De Carvalho, R. R., Ribeiro, A. L. B., Capelato, H. V. & Zepf, S. E. 1997, ApJS, 110, 1 Del Olmo, A., Verdes-Montenegro, L., Yun, M. S. & Perea, J. 1999, in prep. Haynes, M. P. & Giovanelli, R. 1984, AJ, 89, 758 Hibbard, J. E. 1999, 15th meeting of the Institut d'Astrophysique de Paris, July 9-13 1999, “Galaxy Dynamics: from the Early Universe to the Present” Hibbard, J. E. & Van Gorkom, J. H. 1996, AJ, 111, 655 Huchtmeier, W. K. 1997, A&A, 325, 473 Huchtmeier, W. K., Verdes-Montenegro, L., Yun, M. S., Del Olmo, A., & Perea, J. 1999, this volume Pietsch, W., Trinchieri, G., Arp, H. & Sulentic, J. W. 1997, A&A, 322, 89 Ponman, T. J., Bourner, P. D., Ebeling, H. & Bohringer, H. 1996, MNRAS, 283, 690 Sulentic, J. W. & Lorre, J. J. 1983, A&A, 120, 36 Verdes-Montenegro, L., Del Olmo, A., Perea, J., Athanassoula, E., Márquez, I., & Augarde, R. 1997, A&A, 321, 409 Verdes-Montenegro, L., Yun, M. S., Perea, J., Del Olmo, A. & Ho, P. T. P. 1998, ApJ, 497, 89 Vílchez, J. M. & Iglesias-Páramo, J. 1998, ApJ, 506, L101 Williams, B. A., McMahon, P. M., & Van Gorkom, J. H. 1991, AJ, 101, 1957 (WMVG91) Williams, B. A. & Rood, H. J. 1987, ApJS, 63, 265 Williams, B. A. & Van Gorkom, J. 1988, AJ, 95, 352 (WVG88) Williams, B.
A., & Van Gorkom, J. H. 1995, in Groups of Galaxies, ASP Conference Series, Vol. 70, 77 (WVG95) Williams, B. A., Van Gorkom, J. H., Yun, M. S., & Verdes-Montenegro, L. 1997, in Galaxy Interactions at Low and High Redshift (W97) Williams, B. A., Yun, M. S. & Verdes-Montenegro, L. 1999, in prep. (W99) Yun, M. S., Verdes-Montenegro, L., Del Olmo, A., Perea, J. 1997, ApJ, 475, L21
no-problem/9909/hep-lat9909146.html
ar5iv
text
# QCD on 𝛼-Clusters Talk presented by N. Eicker. ## 1 Introduction The urgent need for cheap sustained compute power for lattice QCD (LQCD) provides a strong motivation to fathom the potential of PC or workstation clusters. Only recently have PCs and workstations become both fast and cheap enough to make their clustering over commodity networks economical, in view of local performance, scalability and total system size. Moreover, to operate clusters efficiently one needs open-source operating systems such as Linux. The apparent success of Beowulf clusters and the tremendous peak compute power of Alpha processors, as realized in the Avalon cluster , have immediately attracted the attention of the lattice community. We are going to investigate two different cluster approaches, both based on Compaq Alpha processors: One system (NICSE-TS), which we have designed and benchmarked using state-of-the-art iterative solver codes, is a four-node cluster of 533 MHz 21164 EV56 Alpha processors, installed as a test system at the John von Neumann-Institut für Computing in Jülich/Germany and operated under Linux. Since QCD involves only nearest-neighbor interactions, a mesh-based connectivity appeared to be the natural parallel architecture to handle the ensuing interprocessor communication between the nodes. Our second test cluster (ALiCE-TS) has been designed on the basis of the experience gained with NICSE-TS. Besides the shift to 21264 EV6 Alpha processors, we are using Myrinet, a Gbit network. This promises interprocessor connectivity fast enough to compute LQCD on Alpha clusters. As Myrinet provides a multi-stage crossbar, we have given up the former mesh approach. This test system again consists of four workstations. We will show that ALiCE-TS is superior to the “cheap” NICSE-TS in terms of price/performance ratio by nearly a factor of two. The paper is organized as follows: in Section 2 we give the specifications for the two variant clusters tested, Section 3 describes our benchmark codes and contains some results, and in Section 4 we give price/performance ratios. ## 2 The Testbeds The benchmark systems each consist of four single-processor nodes with two different generations of Alpha processors. The connectivity is fast Ethernet and Myrinet, respectively. ### 2.1 NICSE-TS NICSE-TS is a four-node system with fast Ethernet connectivity. The system is located at NIC, FZ-Jülich. The nodes are very similar to the Avalon nodes, i.e. they contain: * 533 MHz 21164A Alpha microprocessors, 2 MB 3<sup>rd</sup> level cache, Samsung Alpha-PC 164UX motherboards * ECC SDRAM DIMMs (256 MB per node) * D-Link DFE 500 TX Ethernet cards * MPI based on MPIch The main difference from Avalon is the network setup. Where Avalon has an all-to-all network using switches, NICSE-TS uses a 2-D torus. Thus we need four Ethernet cards per node where Avalon employs only one. On the other hand we do not need any switch. We expect that the network performance scales to a large number of nodes for nearest-neighbor communication. All-to-all communication can be achieved via the routing capabilities of the Linux kernel. ### 2.2 ALiCE-TS ALiCE-TS is a four-node cluster with switched Myrinet connectivity. This system is hosted at Wuppertal University.
It includes:

* 466 MHz 21264 Alpha microprocessors, 2 MB 2<sup>nd</sup> level cache, Compaq DS10 motherboards
* ECC SDRAM DIMMs (128 MB per node)
* 64-bit 33 MHz Myrinet-SAN/PCI interfaces
* MPI based on the Myrinet GM library

ALiCE-TS has been purchased as a prototype system for the design of the 128-node Wuppertal Alpha-Linux-Cluster Engine (ALiCE).

## 3 QCD Benchmarks and Results

The computational key problem of LQCD is the very often repeated inversion of the Dirac matrix. It has been shown in that such systems are most efficiently solved by Krylov subspace methods like BiCGStab. State-of-the-art is the application of parallel local lexicographic preconditioning within BiCGStab .

The results of this paper's benchmarks are based on two codes: BiCGStab is a sparse matrix Krylov solver with regular memory access, where computation and communication proceed in an alternating fashion. In this case, DMA capabilities of the communication cards are not exploited. SSOR is the same solver but with local-lexicographic SSOR preconditioning. The SSOR process leads to rather irregular memory access and extensive integer computations. This code is very sensitive to the memory-to-cache bandwidth. Since communication overlaps with computation, DMA can be exploited. Both codes are written in C and compiled with the GNU egcs-1.1.2 C compiler. Timing was done with MPI\_Wtime.

For both codes there exist two versions:

1. To test single-node performance, the code runs without communication operations, otherwise carrying out exactly the same operations as the following parallel version.
2. On the 4-node test machines, the physical system is laid out in a 2-D fashion; consequently, communication is carried out along two dimensions, namely the $z$- and $t$-directions. Assuming $N_{proc}=N_z\times N_t$ processors, the global lattice is divided into $N_t$ slices in the $t$-direction, where every slice consists of $N_z$ slices in the $z$-direction.

In the sequel, we employ a local lattice of size $16^2\times 4\times 8$ on $2\times 2$ processors, such that we emulate a realistic $16^3\times 32$ system on $4\times 4$ processors.

### 3.1 Single-node results

The basic operation in the iterative solution of the Dirac matrix is the product of an SU(3) matrix with two color vectors. The average number of flops per matrix-vector operation is $N_{flop}=171$. The number of complex words to fetch from memory in order to carry out this process is $N_{cwords}=(9+2\times 12)$, leading to $N_{bytes}=528$ bytes for double precision arithmetic. Therefore we expect the maximal performance that can be reached on a single node to be limited by $$P_{max}=\frac{B}{N_{bytes}}N_{flop}=\{\begin{array}{cc}97\hfill & \text{MFlops (UX)}\hfill \\ 420\hfill & \text{MFlops (DS10)}\hfill \end{array}$$ in a steady state of computation and data flow, given a maximal memory bandwidth of 300 and 1300 MB/sec, respectively. Note that our problem size is chosen to be larger than the available caches. The real performances will be smaller due to BLAS-1 and BLAS-2 operations within BiCGStab. Table 1 shows that, on the UX board, the performance of BiCGStab comes close to the limiting value given, while the DS10 performance deviates by more than a factor of 2 from the estimate. The local lattice size presumably is too small to lead to saturation of the bandwidth for the DS10<sup>1</sup><sup>1</sup>1The STREAMS benchmark gives a real bandwidth of 580 MB/sec instead of the theoretical value of 1300.
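The bandwidth-bound estimate above is easy to reproduce; here is a minimal sketch using the flop and byte counts quoted in the text. The bandwidth figures are the theoretical peaks plus the measured STREAMS value from the footnote; nothing else is assumed.

```python
# Bandwidth-limited performance estimate for the SU(3) matrix-vector kernel:
# P_max = B / N_bytes * N_flop, cf. the formula above.

N_FLOP = 171                    # flops per matrix-vector operation
N_CWORDS = 9 + 2 * 12           # complex words loaded (matrix + two color vectors)
N_BYTES = N_CWORDS * 16         # double precision: 16 bytes per complex word -> 528

def p_max_mflops(bandwidth_mb_per_s: float) -> float:
    """Peak sustained MFlops allowed by memory bandwidth alone."""
    return bandwidth_mb_per_s / N_BYTES * N_FLOP

for board, bw in [("UX", 300.0), ("DS10", 1300.0), ("DS10 (STREAMS)", 580.0)]:
    print(f"{board:>15}: {p_max_mflops(bw):6.1f} MFlops")
# -> ~97 MFlops (UX), ~421 MFlops (DS10), ~188 MFlops with the measured 580 MB/s
```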
This difference explains the factor of two. However, as a main result, we find that the improvement in performance in going from the 533 MHz Alpha 21164 to the 466 MHz Alpha 21264 chip is around a factor of two, using identical codes. Furthermore, the SSOR preconditioner with its irregular memory access is, as expected, less effective than the simple BiCGStab.

### 3.2 Four-node results

The impact of interprocessor communication for both connectivities is determined on the four-node testbed systems. As shown in Table 2, the results for the fast Ethernet mesh (denoted by UX) are disappointing. The performance of both codes, SSOR and BiCGStab, is reduced by more than a factor of two compared to the single-node result. The main degradations are due to the massive protocol overhead, forcing the processor into administration instead of computation. User-level networking interfaces promise to circumvent this problem in the near future, but are currently not available for our configuration. It is satisfying to see, by comparing Tables 1 and 2, that the Alpha 21264-Myrinet system (denoted by DS10) with the Myrinet GM library has a communication loss in the range of only 10 to 20 %. We expect a further considerable improvement of these results by employing software with a reduced protocol stack like SCore or ParaStation .

## 4 Conclusion

Comparing price/performance ratios we arrive at the following estimates: An Alpha 21164 system, connected in a fast Ethernet mesh, would, as an optimistic estimate, lead to a 4 GFlops device (sustained) for 128 processors at a price of about 80 k$ per GFlops. A 128-processor DS10 Alpha-Linux-Cluster connected by Myrinet, however, promises to reduce costs to 40 – 50 k$ per GFlops (estimated from list prices as of July 1999) and is therefore in the range of state-of-the-art dedicated QCD machines.
# Lattice QCD with Domain-Wall Fermions

September 1999. Talk presented by T. Izubuchi at Lattice 99, Pisa, Italy.

## 1 Introduction

Domain-wall QCD (DWQCD) is considered to have good properties such as exact chiral symmetry without doublers, no $O(a)$ scaling violation and the existence of a conserved axial current. There exist several pilot studies which seem to support these superior properties. We study DWQCD in the quenched approximation. First we investigate the pion mass and the explicit breaking term of the axial Ward-Takahashi identity to confirm the existence of the chiral zero mode at zero current quark mass. Next we calculate the pion decay constant $f_\pi$ from both the conserved axial current and the local axial current, using the perturbative renormalization factor for the latter, in order to check the reliability of lattice perturbation theory. Finally we explore the negative $m_f$ region to examine the existence of the parity broken phase predicted in .

## 2 Chiral symmetry

The fermion action is identical to the original one, with domain wall height $M$, bare quark mass $m_f$ and the extent of the 5th dimension $N_s$. We employ 10–30 gauge configurations, generated with the plaquette action at $\beta =6.0$ ($a^{-1}\simeq 2$ GeV) on $16^3\times 32\times N_s$ lattices. The unit wall source without gauge fixing is used for quark propagators. The mean-field estimate for the optimal value of $M$, $M=1+4(1-u)$ with $u=1/(8K_c)$, gives $M=1.819$, from $K_c=0.1572$ for the Wilson fermion at $\beta =6.0$.

First we investigate the existence of the chiral zero mode in the chiral limit of the model, $m_f\to 0$ and $N_s\to \infty$. The pion mass squared is plotted as a function of $m_fa$ in Fig. 1. Since the linearity of $M_\pi ^2$ in $m_f$ is well satisfied, we linearly extrapolate it to $m_f=0$ for each $N_s$. We also evaluate a critical quark mass $m_c(N_s,M)$ at which the pion mass squared vanishes. In Fig. 2, $M_\pi ^2$ is plotted as a function of $N_s$ for $m_fa=$ 0.025, 0.050, 0.075 and $0$. Extrapolated values of $M_\pi ^2$ at $m_f=0$ seem to vanish exponentially in $N_s$, while $M_\pi ^2$ at finite $m_f$ remains non-zero. For $N_s=10$, $M_\pi ^2(m_f=0)$ is already as small as that of the NG pion of the KS fermion at the same $\beta$ on the same spatial lattice size. Furthermore $m_c(N_s,M=1.819)$ and the WI-mass, defined by $\langle J_5^q P\rangle /\langle P P\rangle$, also decrease exponentially in $N_s$, as shown in Fig. 3. All these facts indicate that the chiral symmetry is restored for $m_f\to 0$ and $N_s\to \infty$ at $\beta =6.0$.

The lattice scale $a$ is set by the $\rho$ meson mass. In Fig. 4, $M_\rho a$ is plotted as a function of $N_s$. Circles show values linearly extrapolated to $m_f=0$, while squares show those extrapolated to $m_f=m_c$. For small $N_s(=4)$ the two ways of extrapolation give different results, while for $N_s=10$ the two extrapolations are almost identical. We also find that the dependence of $M_\rho$ on the domain-wall height is mild for $M=$ 1.7–1.9 and $N_s=10$: within statistical errors the extrapolated values are consistent with each other.

## 3 Pion decay constant $f_\pi$

The pion decay constant is defined by $m_\pi f_\pi /Z_A=\langle 0|A_4|\pi \rangle$, which is obtained from correlation functions of the pseudoscalar density $P(t)$ and the axial current $A_\mu (t)$ at zero momentum: $\langle X(t)Y(0)\rangle =C_{XY}G(t)$ with $G(t)=\mathrm{exp}(-M_\pi t)/(2M_\pi V_s)$ for $X,Y=P,A_4$.
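The functional form of the correlator just quoted is easy to fit; the sketch below is purely illustrative, with mock data, an invented mass and amplitude, and a fit window chosen by hand, not the actual measurements of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy sketch: extract M_pi and the amplitude C_XY from a zero-momentum
# correlator <X(t)Y(0)> = C * exp(-M*t) / (2*M*V_s), as in the text.
V_s = 16**3                      # spatial volume of the 16^3 x 32 lattice
T = 32
t = np.arange(T)
M_true, C_true = 0.4, 1.3e3      # invented values, only to generate toy data
corr = C_true * np.exp(-M_true * t) / (2 * M_true * V_s)
corr *= 1 + 0.01 * np.random.default_rng(0).normal(size=T)  # mock noise

def model(t, C, M):
    return C * np.exp(-M * t) / (2 * M * V_s)

tmin, tmax = 8, 20               # plateau window, chosen by inspection
(C_fit, M_fit), _ = curve_fit(model, t[tmin:tmax], corr[tmin:tmax], p0=(1e3, 0.5))
print(f"M_pi a = {M_fit:.4f},  C_XY = {C_fit:.1f}")
```

A real analysis would of course use the cosh form appropriate to a periodic lattice and jackknife errors; the sketch only illustrates the parametrization quoted above.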
For the renormalization factor $Z_A$ of the axial current, we take unity for the conserved current and use the value from mean-field improved perturbation theory for the local current. In Fig. 5, $f_\pi$ is shown as a function of $m_f$. Circles are obtained from $C_{AP}$ for the conserved axial current, while squares are from $C_{AP}$ and diamonds from $C_{AA}$ for the local axial current. The triangle is the experimental value. For the local current, filled (open) symbols represent values with (without) perturbative corrections. The three different estimates agree reasonably for all $m_f$ within the current statistics, if 1-loop corrections are included. It is also noted that the value of $f_\pi$ from the conserved current, linearly extrapolated to $m_f=0$, is close to the experimental value.

## 4 Parity broken phase

For finite $N_s$, a parity broken phase may exist in the negative $m_f$ region. As $N_s$ increases, the broken phase shrinks rapidly, and the phase boundary, where the pion mass vanishes, converges to $m_f=0$. To examine this parity broken picture in DWQCD, we calculate pion masses at $m_fa=-0.120, -0.100, -0.080$ for $N_s=4$ and $M=1.819$. The pion propagators at these parameters take peculiar shapes resembling the character "W", which has often been observed for the Wilson fermion near or in the parity broken phase. The pion mass squared as a function of $m_f$ is shown in Fig. 6. Extrapolations of $M_\pi ^2$ to zero both from positive and negative $m_f$ (the two largest $m_f$ are used on the negative side) indicate that a parity broken phase may exist around $m_fa\simeq -0.03$ at this parameter. Needless to say, higher statistics and a wider variation of the parameters are necessary for a definite conclusion.

## 5 Conclusions and discussions

At $\beta =6.0$ we have several indications that the chiral symmetry is restored as $N_s\to \infty$. We see that all three different estimates for $f_\pi$ are consistent with each other. This shows that mean-field improved perturbation theory works well at $\beta =6.0$ in DWQCD. Although the value of $f_\pi$ at $\beta =6.0$ turns out to be compatible with the experimental one, detailed scaling studies at different $\beta$'s are needed before making a firm statement. The chiral symmetry in DWQCD, however, may fail to be recovered on coarse lattices. If this is true, scaling studies of DWQCD become rather difficult. We have examined negative $m_f$ to look for the parity broken phase in DWQCD. The result seems consistent with the parity broken picture, though further confirmation is definitely required.

This work is supported in part by the Grants-in-Aid of the Ministry of Education (Nos. 02375, 02373). TI and YT are JSPS Research Fellows.
# Position-momentum local realism violation of the Hardy type

Bernard Yurke<sup>1</sup>, Mark Hillery<sup>2</sup>, and David Stoler<sup>1</sup>

<sup>1</sup>Bell Laboratories, Lucent Technologies, Murray Hill, NJ 07974
<sup>2</sup>Hunter College of the City University of New York, 695 Park Ave., New York, NY 10021

We show that it is, in principle, possible to perform local realism violating experiments of the Hardy type in which only position and momentum measurements are made on two particles emanating from a common source. In the optical domain, homodyne detection of the in-phase and out-of-phase amplitude components of an electromagnetic field is analogous to position and momentum measurement. Hence, local realism violations of the Hardy type are possible in optical systems employing only homodyne detection.

PACS: 03.65.Bz

As an example to support their contention that quantum mechanics is incomplete, Einstein, Podolsky, and Rosen (EPR) exhibited a quantum mechanical wave function, describing two particles emitted from a common source, in which the positions and the momenta of the two particles were strongly correlated. This wave function described the situation in which the measurement of the position of one of the particles would allow one to predict with complete certainty the position of the other particle, and the measurement of the momentum of one of the particles would allow one to predict with complete certainty the momentum of the other particle. Because of these strong correlations, even when the particles were well separated, it was argued that each of the particles must have a definite position and a definite momentum, even though a quantum mechanical wave function does not simultaneously ascribe a definite position and a definite momentum to a particle. Therefore, it was argued that quantum mechanics is incomplete. It was hoped that in the future a complete theory could be devised in which a definite position and definite momentum would be ascribed to each particle. In 1992 the EPR Gedanken experiment was actually carried out as a quantum optics experiment in which electromagnetic field analogues of position and momentum were measured on correlated photon states generated by parametric down-conversion . The analogues of position and momentum were the two quadrature amplitudes of the electromagnetic field, measured via homodyne detectors . A quantum mechanical state having the properties of the state employed by EPR had been realized. However, since the work of Bell it has been known that a complete theory of the type EPR hoped for, capable of making the same predictions as quantum mechanics, does not exist . A variety of experiments, referred to as local realism violating experiments, have been proposed and performed, demonstrating that quantum mechanics is inherently at odds with classical notions about how effects propagate. Most striking among the proposals are the "one event" local realism violating experiments devised by Greenberger, Horne and Zeilinger (GHZ) and by Hardy . The Bell, GHZ, and Hardy experiments that have been proposed generally measure spin components or count particles, i.e., they employ observables that have a discrete spectrum. There are, however, some examples in which continuous observables or a mixture of discrete and continuous observables have been employed .
In fact, Bell showed that position and momentum measurements on a pair of particles in a state for which the Wigner function has negative regions can give rise to local realism violating effects of the Clauser, Holt, Horne, and Shimony type . Here we show that local realism violating effects of the Hardy type can be obtained through position and momentum measurements on a pair of particles prepared in the appropriate state. Given that homodyne detection measurements of the two quadrature amplitude components of an electromagnetic field provide an optical analogue to position and momentum measurements, an optical experiment exhibiting local realism violations of the Hardy type can be devised, provided the appropriate entangled state can be generated.

A local realism constraint on the positions and momenta measured for a pair of particles emitted from a common source can be arrived at by regarding the detectors as responding to messages emitted by the source . The source does not know, ahead of time, whether a position or a momentum measurement will be performed by a given detector. Hence, the instruction set emitted by the source must tell the detectors what to do in either case. The instruction sets are conveniently labeled via the array $(\alpha _{x1},\alpha _{x2};\alpha _{p1},\alpha _{p2})$ where $\alpha _{xi}$ and $\alpha _{pi}$ are members of the set $\{+,-\}$. Here $\alpha _{xi}$ denotes whether detector $i$, measuring the position $x_i$ of particle $i$, will report the position to be positive ($\alpha _{xi}=+$) or negative ($\alpha _{xi}=-$). Similarly, $\alpha _{pi}$ denotes whether detector $i$, measuring the momentum $p_i$ of particle $i$, will report the momentum to be positive ($\alpha _{pi}=+$) or negative ($\alpha _{pi}=-$). The probability that a message of the form $(\alpha _{x1},\alpha _{x2};\alpha _{p1},\alpha _{p2})$ is sent will be denoted as $P(\alpha _{x1},\alpha _{x2};\alpha _{p1},\alpha _{p2})$. Let $P_{\beta _1\beta _2}(\alpha _{\beta _11},\alpha _{\beta _22})$, where $\beta _i\in \{x,p\}$, be the probability that detector $1$ measuring $\beta _1$ reports $\alpha _{\beta _11}$ while detector $2$ measuring $\beta _2$ reports $\alpha _{\beta _22}$. For example, $P_{xp}(+,-)$ denotes the joint probability that detector 1, measuring position, will report a positive position while detector 2, measuring momentum, will report a negative momentum. In terms of the message probabilities, $P_{pp}(-,-)$ is given by

$$P_{pp}(-,-)=P(+,+;-,-)+P(+,-;-,-)+P(-,+;-,-)+P(-,-;-,-).$$ (2)

The joint probabilities provide the following bounds on the message probabilities

$$P(+,+;-,-)\le P_{xx}(+,+),$$ (3)
$$P(+,-;-,-)\le P_{px}(-,-),$$ (4)
$$P(-,+;-,-)\le P_{xp}(-,-),$$ (5)

and

$$P(-,-;-,-)\le \mathrm{min}\{P_{xp}(-,-),P_{px}(-,-)\}.$$ (6)

Applying these inequalities to Eq. (2) yields the following local realism constraint on the joint probabilities:

$$P_{pp}(-,-)\le P_{xx}(+,+)+P_{px}(-,-)+P_{xp}(-,-)+\mathrm{min}\{P_{xp}(-,-),P_{px}(-,-)\}.$$ (8)

If it is rigorously known that the probabilities on the right-hand side of the inequality (8) are all zero,

$$P_{xx}(+,+)=P_{xp}(-,-)=P_{px}(-,-)=0,$$ (9)

then it follows, according to local realism, that $P_{pp}(-,-)$ is rigorously zero. Thus, the appearance of a single event in which both particles have negative momentum would violate local realism.
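The bound (8) can be checked mechanically. The sketch below, which assumes nothing beyond the definitions above (the function names and the random sampling are ours), enumerates all 16 instruction sets with arbitrary nonnegative weights and verifies that no local model exceeds the bound.

```python
import random
from itertools import product

# Brute-force check of Eq. (8): messages are tuples (a_x1, a_x2, a_p1, a_p2).
signs = (+1, -1)
messages = list(product(signs, signs, signs, signs))

def joint(P, b1, b2, a1, a2):
    """P_{b1 b2}(a1, a2): detector 1 measures b1, detector 2 measures b2."""
    idx = {"x": (0, 1), "p": (2, 3)}   # slots for (detector 1, detector 2)
    i, j = idx[b1][0], idx[b2][1]
    return sum(p for m, p in P.items() if m[i] == a1 and m[j] == a2)

rng = random.Random(1)
for _ in range(10_000):
    w = [rng.random() for _ in messages]
    tot = sum(w)
    P = {m: wi / tot for m, wi in zip(messages, w)}  # a random local model
    lhs = joint(P, "p", "p", -1, -1)
    rhs = (joint(P, "x", "x", +1, +1) + joint(P, "p", "x", -1, -1)
           + joint(P, "x", "p", -1, -1)
           + min(joint(P, "x", "p", -1, -1), joint(P, "p", "x", -1, -1)))
    assert lhs <= rhs + 1e-12
print("bound (8) holds for all sampled local models")
```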
This situation, referred to as "one event" local realism violation, of course cannot be achieved in practice, because with a finite amount of data or the presence of spurious events it is impossible to rigorously demonstrate, experimentally, that Eq. (9) holds for a given physical system. Nevertheless, if the spurious event rate is sufficiently small, it is possible to demonstrate to a high degree of certainty with a finite amount of data that the inequality Eq. (8) is violated.

It is shown here how a wave function can be constructed that satisfies Eq. (9) and for which the joint probability on the left-hand side of (8) is nonzero,

$$P_{pp}(-,-)\ne 0.$$ (10)

Let the wave function be denoted by $\psi _{\beta _1\beta _2}$, depending on the representation. For example, $\psi _{xp}$ is the wave function in the representation in which the position coordinate of particle 1 and the momentum coordinate of particle 2 are employed. Eq. (9) imposes the following conditions on the wave function:

$$\psi _{xx}(x_1,x_2)=0\mathrm{\ when\ }x_1\ge 0\mathrm{\ and\ }x_2\ge 0,$$ (11)
$$\psi _{px}(p_1,x_2)=0\mathrm{\ when\ }p_1\le 0\mathrm{\ and\ }x_2\le 0,$$ (12)

and

$$\psi _{xp}(x_1,p_2)=0\mathrm{\ when\ }x_1\le 0\mathrm{\ and\ }p_2\le 0.$$ (13)

A wave function satisfying these conditions can be constructed as follows: Let $g(p_1,p_2)$ be a function that is nonzero only when $p_1$ and $p_2$ are positive, i.e.,

$$g(p_1,p_2)=0\mathrm{\ if\ }p_1\le 0\mathrm{\ or\ }p_2\le 0.$$ (14)

Its Fourier transform, denoted by $f(x_1,x_2)$, is

$$f(x_1,x_2)=\frac{1}{2\pi }\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{i(p_1x_1+p_2x_2)}g(p_1,p_2)\,dp_1dp_2.$$ (15)

The wave function $\psi _{xx}$ is then given by

$$\psi _{xx}(x_1,x_2)=N[1-\theta (x_1)\theta (x_2)]f(x_1,x_2)$$ (16)

where $\theta (x)$ is the Heaviside function defined by

$$\theta (x)=\{\begin{array}{cc}1& \mathrm{if}\ x\ge 0\\ 0& \mathrm{if}\ x<0\end{array}$$ (17)

and $N$ is the normalization coefficient chosen so that $\psi _{xx}(x_1,x_2)$ is normalized:

$$\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|\psi _{xx}(x_1,x_2)|^2\,dx_1dx_2=1.$$ (18)

Eq. (11) is enforced by the factor in square brackets appearing in Eq. (16). That Eq. (12) is also satisfied is now demonstrated. $\psi _{px}$ is a Fourier transform of $\psi _{xx}$:

$$\psi _{px}(p_1,x_2)=\frac{1}{\sqrt{2\pi }}\int _{-\infty }^{\infty }e^{-ip_1x_1}\psi _{xx}(x_1,x_2)\,dx_1.$$ (19)

But, from Eq. (16) this reduces to

$$\psi _{px}(p_1,x_2)=\frac{N}{\sqrt{2\pi }}\int _{-\infty }^{\infty }e^{-ip_1x_1}f(x_1,x_2)\,dx_1$$ (20)

when $x_2\le 0$. Substituting Eq. (15) into this and carrying out the $x_1$ integration followed by a momentum integration yields

$$\psi _{px}(p_1,x_2)=\frac{N}{\sqrt{2\pi }}\int _{-\infty }^{\infty }e^{ip_2x_2}g(p_1,p_2)\,dp_2$$ (21)

when $x_2\le 0$. It is evident from Eq. (14) that the right-hand side of Eq. (21) is zero when $p_1\le 0$, that is, Eq. (12) is satisfied. A similar argument shows that the wave function of Eq. (16) also satisfies Eq. (13). Transforming Eq. (16) into the momentum representation for both particles yields, keeping Eq. (14) in mind,
$$\psi _{pp}(p_1,p_2)=-\frac{N}{2\pi }\int _0^{\infty }\int _0^{\infty }e^{-i(p_1x_1+p_2x_2)}f(x_1,x_2)\,dx_1dx_2\mathrm{\ when\ }p_1\le 0\mathrm{\ and\ }p_2\le 0.$$ (23)

$\psi _{pp}(p_1,p_2)$ evaluated over this range is what is needed to compute $P_{pp}(-,-)$:

$$P_{pp}(-,-)=\int _{-\infty }^0\int _{-\infty }^0|\psi _{pp}(p_1,p_2)|^2\,dp_1dp_2.$$ (24)

If $\psi _{pp}(p_1,p_2)\ne 0$ over some region in the domain ($p_1<0$ and $p_2<0$), then a wave function has been constructed that violates the local realism condition Eq. (8). We now specialize to the case when $g(p_1,p_2)$ factorizes as follows:

$$g(p_1,p_2)=g(p_1)g(p_2)$$ (25)

where

$$g(p)=0\mathrm{\ for\ }p\le 0.$$ (26)

Then $f(x_1,x_2)$ factorizes,

$$f(x_1,x_2)=f(x_1)f(x_2),$$ (27)

where

$$f(x)=\frac{1}{\sqrt{2\pi }}\int _0^{\infty }e^{ipx}g(p)\,dp.$$ (28)

Also, Eq. (23) reduces to

$$\psi _{pp}(p_1,p_2)=-N\psi _p(p_1)\psi _p(p_2)\mathrm{\ when\ }p_1\le 0\mathrm{\ and\ }p_2\le 0$$ (29)

where

$$\psi _p(p)=\frac{1}{\sqrt{2\pi }}\int _0^{\infty }e^{-ipx}f(x)\,dx\mathrm{\ when\ }p\le 0.$$ (30)

Substituting Eq. (29) into Eq. (24) yields

$$P_{pp}(-,-)=N^2\left[\int _{-\infty }^0|\psi _p(p)|^2\,dp\right]^2.$$ (31)

As a specific example, let $g(p)$ be given by

$$g(p)=\{\begin{array}{cc}\sqrt{2\lambda }e^{-\lambda p}& \mathrm{for}\ p>0\\ 0& \mathrm{for}\ p\le 0\end{array}.$$ (32)

From this, using Eq. (28), one obtains

$$f(x)=i\sqrt{\frac{\lambda }{\pi }}\frac{1}{x+i\lambda }.$$ (33)

Substituting this into Eq. (16), using Eq. (27), and computing the norm, one obtains

$$N=\frac{2}{\sqrt{3}}.$$ (34)

Substituting Eq. (33) into Eq. (30) yields, for $p\le 0$,

$$\psi _p(p)=\frac{i}{\pi }\sqrt{\frac{\lambda }{2}}\left[\int _0^{\infty }\frac{\mathrm{cos}(|p|x)}{x+i\lambda }\,dx+i\int _0^{\infty }\frac{\mathrm{sin}(|p|x)}{x+i\lambda }\,dx\right].$$ (35)

By breaking the right-hand side of this equation into real and imaginary parts and by making use of formulas given by Gradshteyn and Ryzhik (section 3.723, Eqs. 1 through 4), this equation simplifies to

$$\psi _p(p)=\frac{i}{\pi }\sqrt{\frac{\lambda }{2}}e^{\lambda |p|}\mathrm{Ei}(-\lambda |p|).$$ (36)

From this one finds

$$\int _{-\infty }^0|\psi _p(p)|^2\,dp=\frac{1}{2\pi ^2}\int _0^{\infty }e^{2x}\mathrm{Ei}^2(-x)\,dx.$$ (37)

By performing a numerical integration of this equation we have found that

$$\int _{-\infty }^0|\psi _p(p)|^2\,dp=\frac{1}{8}$$ (38)

to one part in $10^8$. From Eqs. (31) and (34) one thus obtains

$$P_{pp}(-,-)=\frac{1}{48}.$$ (39)

Thus, for a system possessing the wave function described here, a local realism violating event in which the momenta of both particles are negative occurs at a rate of one event in 48. It has been shown here that local realism violating experiments of the Hardy type, in which only position and momentum measurements are performed, are possible in principle. A means of experimentally generating the appropriate states has not been offered, so it remains to be seen whether such states can be realized in practice. In this regard we derive hope from the fact that the experiment proposed by EPR became realizable 57 years later as an optical analogue (through the development of parametric down-converters and homodyne detectors), and we take heart in the fact that state synthesis is an active topic of research .
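As a cross-check of the numbers in Eqs. (37)–(39), the integral can be evaluated with scipy; the sketch below is ours, not part of the original analysis. scipy's `expi` is the exponential integral Ei, and the split of the integration range at $x=1$ merely helps the quadrature with the logarithmic singularity at the origin.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

# Check Eqs. (37)-(39): I = (1/2 pi^2) * Int_0^inf e^{2x} Ei^2(-x) dx = 1/8,
# then P_pp(-,-) = N^2 * I^2 with N^2 = 4/3 from Eq. (34), giving 1/48.
integrand = lambda x: np.exp(2 * x) * expi(-x) ** 2

# log^2 singularity at x = 0, slow ~1/x^2 tail at infinity: split the range.
I1, _ = quad(integrand, 0.0, 1.0)
I2, _ = quad(integrand, 1.0, np.inf)
I = (I1 + I2) / (2 * np.pi ** 2)

P = (4.0 / 3.0) * I ** 2               # Eq. (31) with N^2 = 4/3
print(I, 1 / 8)                        # -> 0.125...
print(P, 1 / 48)                       # -> 0.02083...
```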
## 1 Introduction

Curiously enough, the most model independent prediction of supersymmetric extensions of the Standard Model concerns a 'standard' particle: the mass $m_h$ of the (lightest CP even) Higgs boson. Within the minimal supersymmetric extension of the Standard Model (MSSM) its mass is bounded, at tree level, by

$$m_h^2\le M_Z^2\mathrm{cos}^22\beta$$ (1.1)

where $\mathrm{tan}\beta =h_1/h_2$ ($H_1$ couples to up-type quarks in our convention). It has been realized already some time ago that loop corrections weaken this upper bound . These loop corrections depend on the top quark Yukawa coupling $h_t$ and on the soft susy breaking parameters such as the stop masses of $O(M_{susy})$. At the one loop level, given the present experimental errors on the top mass $m_t$ and assuming $M_{susy}\lesssim 1$ TeV, the upper limit on $m_h$ is $\lesssim 140$ GeV. Also two loop corrections to $m_h$ have been considered in the MSSM ; these have the tendency to lower the upper bound on $m_h$ by $\sim 10$ GeV.

The subject of the present paper is the next-to-minimal supersymmetric extension of the Standard Model ((M+1)SSM), where a gauge singlet superfield $S$ is added to the Higgs sector. It allows one to omit the so-called $\mu$ term $\mu H_1H_2$ in the superpotential of the MSSM, and to replace it by a Yukawa coupling (plus a singlet self coupling):

$$W=\lambda SH_1H_2+\frac{\kappa }{3}S^3+\dots$$ (1.2)

The superpotential (1.2) is thus scale invariant, and the electroweak scale appears only through the susy breaking terms. In view of ongoing Higgs searches at LEP2 and, in the near future, at Tevatron Run II , it is important to check the model dependence of bounds on the Higgs mass. In the (M+1)SSM, the upper bound on the mass $m_1$ of the lightest CP even Higgs<sup>1</sup><sup>1</sup>1As there are three CP even Higgs states in the (M+1)SSM, we denote them as $S_i$ with masses $m_i$, i=1..3, in increasing order. differs from the one of the MSSM already at tree level: now we have

$$m_1^2\le M_Z^2\left(\mathrm{cos}^22\beta +\frac{2\lambda ^2}{g_1^2+g_2^2}\mathrm{sin}^22\beta \right)$$ (1.3)

where $g_1$ and $g_2$ denote the $U(1)_Y$ and the $SU(2)_L$ gauge couplings. Note that, for $\lambda <.53$, $m_1$ is still bounded by $M_Z$ at tree level. Large values of $\lambda$, $\lambda >.7$, are in any case prohibited if one requires the absence of a Landau singularity for $\lambda$ below the GUT scale .

Loop corrections to $m_1$ have also been considered in the (M+1)SSM . Given $m_t$ and assuming again $M_{susy}\lesssim 1$ TeV, the upper limit on $m_1$ at one loop is then $\sim 150$ GeV. Within the constrained (M+1)SSM (the C(M+1)SSM), where universal soft susy breaking terms at the GUT scale are assumed , $\lambda$ is always below $.3$, and the upper limit on $m_1$ reduces to the one of the MSSM (at one loop) of $\sim 140$ GeV. Two loop corrections in the (M+1)SSM have recently been considered in .

Within the (M+1)SSM this is, however, not the end of the story: it is well known that the lightest Higgs scalar $S_1$ can now be dominantly a gauge singlet state. In this case it decouples from the gauge bosons and becomes invisible in Higgs production processes, and the lightest visible Higgs boson is then actually the second lightest one, $S_2$.
Fortunately, under these circumstances $S_2$ cannot be too heavy : In the extreme case of a pure singlet lightest Higgs, the mass $m_2$ of the next-to-lightest Higgs scalar is again below the upper limit originally derived for $m_1$. In general, however, mixed scenarios can be realized, with a weakly coupled (but not pure singlet) lightest Higgs and a second lightest Higgs above the previous $m_1$ limits. Although analyses of the Higgs sector including these scenarios in the (M+1)SSM have been presented before , we find that these should be improved: First, experimental errors on the top quark pole mass $m_t^{pole}=173.8\pm 5.2$ GeV have been reduced considerably, leading to stronger constraints on the top quark Yukawa coupling $h_t$, which determines to a large extent the radiative corrections to $m_{1,2}$. Second, at least the dominant two loop corrections to the effective potential should be taken into account, since they are not necessarily negligible.

The purpose of the present paper is thus an analysis of the allowed masses and couplings to the gauge bosons of the lightest CP even Higgs scalars in the (M+1)SSM, including present constraints on $m_t$ and a two loop improvement of the Higgs potential. In the next section we present our method of obtaining the dominant two loop terms in the effective potential, and in section 3 we give the resulting upper bound on the lightest Higgs mass. Although this upper limit can be obtained analytically, the mass of the second lightest Higgs in relation to its coupling to the gauge bosons requires a numerical analysis. Our methods of scanning the parameter space of the model in two different scenarios (constrained and general (M+1)SSM) are presented in section 4. Results on the Higgs masses and couplings, and conclusions, are presented in section 5.

## 2 Two loop corrections

In order to obtain the correct upper limit on the Higgs boson mass in the presence of soft susy breaking terms, radiative corrections to several terms in the effective action have to be considered. Let us first introduce a scale $Q\sim M_{susy}$, where $M_{susy}$ is of the order of the susy breaking terms. Let us assume that quantum corrections involving momenta $p^2>Q^2$ have been evaluated; the resulting effective action $\mathrm{\Gamma }_{eff}(Q)$ is then still of the standard supersymmetric form plus soft susy breaking terms. Assuming correctly normalized kinetic terms (after appropriate rescaling of the fields), the $Q$ dependence of the parameters in $\mathrm{\Gamma }_{eff}(Q)$ is given by the supersymmetric $\beta$ functions (valid up to a possible GUT scale $M_{GUT}$). Often one is interested in relating the parameters in $\mathrm{\Gamma }_{eff}(Q)$ to more fundamental parameters at $M_{GUT}$. To this end one integrates the supersymmetric renormalization group equations between $M_{GUT}$ and $Q\sim M_{susy}$ to one or, if one wishes, to two loop accuracy. Note, however, that the limits on the Higgs boson mass depend exclusively on the parameters in $\mathrm{\Gamma }_{eff}(Q)$ at the scale $Q\sim M_{susy}$; the two loop contributions to the effective potential considered below serve to specify this dependence more precisely. The accuracy to which one has (possibly) related the parameters at the scale $Q\sim M_{susy}$ to parameters at a scale $M_{GUT}$ is completely irrelevant for the relation between the Higgs boson mass and the parameters at the scale $Q\sim M_{susy}$.
One is left with the computation of quantum corrections to $\mathrm{\Gamma }_{eff}$ involving momenta $p^2<Q^2$. Subsequently the quantum corrections to the following terms in $\mathrm{\Gamma }_{eff}$ will play a role:

1. Corrections to the kinetic terms of the Higgs bosons. Due to gauge invariance the same quantum corrections contribute to the kinetic energy and to the Higgs-$Z$ boson couplings, which affect the relation between the Higgs vevs and $M_Z$;
2. Corrections to the Higgs-top quark Yukawa coupling;
3. Corrections to the Higgs effective potential. These corrections could, in principle, be decomposed into contributions to the Yukawa couplings $\lambda$ and $\kappa$ of eq. (1.2) and the soft terms (these contributions are the ones proportional to $\mathrm{ln}Q^2$ or, at two loop order, $\mathrm{ln}^2Q^2$), and "non-supersymmetric" contributions which are $Q^2$ independent. These latter contributions to the effective potential are of the orders $(vev)^n$ with $n>4$ and become small in the case of large soft terms compared to the vevs. Our results in section 5 are based on the effective potential including these contributions (which are not necessarily numerically irrelevant), and there is no need to perform the decomposition of the radiative corrections to the effective potential explicitly.

Let us start with the last item: The Higgs effective potential $V_{eff}$ can be developed in powers of $\mathrm{\hslash }$ or loops as

$$V_{eff}=V^{(0)}+V^{(1)}+V^{(2)}+\dots$$ (2.1)

Within the (M+1)SSM, we are interested in the dependence of $V_{eff}$ on the three CP even scalar vevs $h_1$, $h_2$ and $s$ (assuming no CP violation in the Higgs sector). The tree level potential $V^{(0)}$ is determined by the superpotential (1.2) and the standard soft susy breaking terms . For completeness, and in order to fix our conventions, we give here the expression for $V^{(0)}$:

$$V^{(0)}=m_{H_1}^2h_1^2+m_{H_2}^2h_2^2+m_S^2s^2-2\lambda A_\lambda h_1h_2s+\frac{2}{3}\kappa A_\kappa s^3+\lambda ^2h_1^2h_2^2+\lambda ^2(h_1^2+h_2^2)s^2-2\kappa \lambda h_1h_2s^2+\kappa ^2s^4+\frac{g_1^2+g_2^2}{8}(h_1^2-h_2^2)^2.$$ (2.2)

The one loop corrections to the effective potential are given by

$$V^{(1)}=\frac{1}{64\pi ^2}\text{STr}\,M^4\left[\mathrm{ln}\left(\frac{M^2}{Q^2}\right)-\frac{3}{2}\right],$$ (2.3)

where we take only top and stop loops into account. The relevant field dependent masses are the top quark mass

$$m_t=h_th_1$$ (2.4)

and the stop mass matrix (in the $(T_R^c,T_L)$ basis)

$$\left(\begin{array}{cc}m_T^2+m_t^2& m_t\stackrel{~}{A}_t\\ m_t\stackrel{~}{A}_t& m_Q^2+m_t^2\end{array}\right),$$ (2.5)

where $m_T$, $m_Q$ are the soft stop masses and

$$\stackrel{~}{A}_t=A_t-\lambda s\mathrm{cot}\beta$$ (2.6)

is the so-called stop mixing. In eq. (2.5) we have neglected the electroweak D terms, which would only give small contributions to the effective potential in the relevant region $m_T,m_Q\gg M_Z$. The masses of the physical eigenstates $\stackrel{~}{t}_1,\stackrel{~}{t}_2$ then read

$$m_{\stackrel{~}{t}_1,\stackrel{~}{t}_2}^2=M_{susy}^2+m_t^2\pm \sqrt{\delta ^2M_{susy}^4+m_t^2\stackrel{~}{A}_t^2}$$ (2.7)

$$\text{with}\ M_{susy}^2\equiv \frac{1}{2}(m_Q^2+m_T^2)\ \text{and}\ \delta \equiv \left|\frac{m_Q^2-m_T^2}{m_Q^2+m_T^2}\right|.$$ (2.8)

Note that the top Yukawa coupling $h_t$ in eq. (2.4) and below is defined at the scale $Q$, cf. the discussion at the beginning of this section.
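Eqs. (2.5)–(2.8) translate directly into code; the following sketch, with invented numerical inputs (the soft masses, the $\lambda s$ term and $\mathrm{tan}\beta$ are placeholders), just evaluates the stop eigenvalues in the D-term-free approximation used here.

```python
import numpy as np

# Stop mass eigenvalues, Eqs. (2.5)-(2.8).
def stop_masses(mQ2, mT2, mt, At, lam, s, tanb):
    """Return (m_st1^2, m_st2^2), lighter first, from the 2x2 matrix (2.5)."""
    At_eff = At - lam * s / tanb          # Eq. (2.6): A_t - lambda*s*cot(beta)
    Msusy2 = 0.5 * (mQ2 + mT2)            # Eq. (2.8)
    delta = abs(mQ2 - mT2) / (mQ2 + mT2)
    r = np.sqrt(delta**2 * Msusy2**2 + mt**2 * At_eff**2)
    return Msusy2 + mt**2 - r, Msusy2 + mt**2 + r

# Illustrative numbers only: degenerate soft masses (delta = 0), 1 TeV scale,
# maximal mixing A_t = sqrt(6)*M_susy, running top mass ~165 GeV.
m1sq, m2sq = stop_masses(1e6, 1e6, 165.0, np.sqrt(6) * 1e3, 0.0, 0.0, 2.7)
print(np.sqrt(m1sq), np.sqrt(m2sq))       # lighter / heavier stop in GeV
```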
In the case of large susy breaking terms compared to the vevs $h_i$, $V^{(1)}$ can be expanded in (even) powers of $h_i$. The terms quadratic in $h_i$ will not affect the upper bound on the Higgs mass (and can be absorbed into the unknown soft parameters $m_{H_1}$, $m_{H_2}$ and $A_\lambda$ in (2.2)). In the approximation where the stop mass splitting $\delta$ is small<sup>2</sup><sup>2</sup>2This approximation is well motivated in the C(M+1)SSM where we take universal soft terms at the GUT scale. On the other hand, we have checked numerically that, in the general (M+1)SSM, the lightest Higgs mass takes its maximal value for $\delta \simeq 0$., the quartic terms read

$$V^{(1)}|_{h_i^4}=\frac{3h_t^4}{16\pi ^2}h_1^4\left(\frac{1}{2}\stackrel{~}{X}_t+t\right),$$ (2.9)

where

$$t\equiv \mathrm{ln}\left(\frac{M_{susy}^2+m_t^2}{m_t^2}\right)$$ (2.10)

and

$$\stackrel{~}{X}_t\equiv 2\frac{\stackrel{~}{A}_t^2}{M_{susy}^2+m_t^2}\left(1-\frac{\stackrel{~}{A}_t^2}{12(M_{susy}^2+m_t^2)}\right).$$ (2.11)

In our computations, however, we used the full expression (2.3) for $V^{(1)}$; we will use the quartic terms (2.9) in the next section only in order to compare our two loop result to those of refs. .

Next, we consider the dominant two loop corrections. These will be numerically important only for large susy breaking terms compared to $h_i$, hence we will expand again in powers of $h_i$. Since the terms quadratic in $h_i$ can again be absorbed into the tree level soft terms, we just consider the quartic terms, and here only those which are proportional to large couplings: terms $\sim \alpha _sh_t^4$ and $\sim h_t^6$. Finally, we are only interested in leading logs (terms quadratic in $t$). The corresponding expression for $V^{(2)}$ can be obtained from the explicit two loop calculation of $V_{eff}$ in or, as we have checked explicitly, from the requirement that the complete effective potential has to satisfy the renormalization group equations also at scales $Q<M_{susy}$, provided the non-supersymmetric $\beta$ function for $h_t$ is used. One obtains in both cases

$$V_{LL}^{(2)}=3\left(\frac{h_t^2}{16\pi ^2}\right)^2h_1^4\left(32\pi \alpha _s-\frac{3}{2}h_t^2\right)t^2.$$ (2.12)

Now, we turn to the quantum corrections to the Higgs boson kinetic terms. They lead to a wave function renormalization factor $Z_{H_1}$ in front of the $D_\mu H_1D^\mu H_1$ term with, to order $h_t^2$,

$$Z_{H_1}=1+3\frac{h_t^2}{16\pi ^2}t$$ (2.13)

Finally, the quantum corrections to the $H_1$-top quark Yukawa coupling $h_t$ have to be considered. After an appropriate rescaling of the $H_1$ and top quark fields in order to render their kinetic terms properly normalized, these quantum corrections lead to an effective coupling $h_t(m_t)$ with, to orders $h_t^2$, $\alpha _s$,

$$h_t(m_t)=h_t(Q)\left(1+\frac{1}{32\pi ^2}\left(32\pi \alpha _s-\frac{9}{2}h_t^2\right)t\right).$$ (2.14)

In eqs. (2.13) and (2.14) the large logarithm $t$ is actually given by $\mathrm{ln}\left(\frac{Q^2}{m_t^2}\right)$, where $Q^2$ acts as a UV cutoff, cf. the discussion at the beginning of this section. In the relevant region $M_{susy}\gg m_t$ the expression (2.10) for $t$ can be used here as well.
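The stop-mixing function (2.11) is simple enough to tabulate; the small sketch below (the default top mass is our assumption) just confirms the 'maximal mixing' statement invoked in section 3: $\stackrel{~}{X}_t$ peaks at the value 6, reached for $\stackrel{~}{A}_t^2=6(M_{susy}^2+m_t^2)$, i.e. $\stackrel{~}{A}_t\approx \sqrt{6}M_{susy}$ when $M_{susy}\gg m_t$.

```python
import numpy as np

# X_t of Eq. (2.11): X_t = 2a(1 - a/12) with a = A_t^2/(M_susy^2 + m_t^2).
def X_t(At, Msusy, mt=165.0):
    a = At**2 / (Msusy**2 + mt**2)
    return 2.0 * a * (1.0 - a / 12.0)

Msusy, mt = 1000.0, 165.0
a_grid = np.linspace(0.0, 12.0, 1201)
At_grid = np.sqrt(a_grid * (Msusy**2 + mt**2))
vals = X_t(At_grid, Msusy, mt)
i = np.argmax(vals)
print(a_grid[i], vals[i])   # -> a = 6.0 gives X_t = 6.0 ('maximal mixing')
print(X_t(0.0, Msusy, mt))  # -> 0.0, the 'no mixing' case
```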
The (running) top quark mass is then given by

$$m_t(m_t)=h_t(m_t)Z_{H_1}^{1/2}h_1$$ (2.15)

and the relation between the pole and running mass, to order $\alpha _s$, reads

$$m_t^{pole}=m_t(m_t)\left(1+\frac{4\alpha _s}{3\pi }\right).$$ (2.16)

## 3 Upper bound on the lightest Higgs mass

In this section we derive an analytic upper bound on the mass of the lightest Higgs scalar. First, we summarize our contributions to the effective potential. As is already known, in the (M+1)SSM the upper bound on the lightest Higgs mass $m_1$ is saturated when its singlet component vanishes . One is then only interested in the $h_i$-dependent part of the effective potential. Assuming $h_i\ll M_{susy}$, i.e. up to $O(h_i^4)$, one obtains from eqs. (2.2), (2.9) and (2.12)

$$V_{eff}(h_1,h_2)=\stackrel{~}{m}_1^2h_1^2+\stackrel{~}{m}_2^2h_2^2-\stackrel{~}{m}_3^2h_1h_2+\frac{g_1^2+g_2^2}{8}(h_1^2-h_2^2)^2+\lambda ^2h_1^2h_2^2+\frac{3h_t^4}{16\pi ^2}h_1^4\left(\frac{1}{2}\stackrel{~}{X}_t+t\right)+3\left(\frac{h_t^2}{16\pi ^2}\right)^2h_1^4\left(32\pi \alpha _s-\frac{3}{2}h_t^2\right)t^2$$ (3.1)

with

$$\stackrel{~}{m}_1^2=m_{H_1}^2+\lambda ^2s^2+\text{rad. corrs.},\quad \stackrel{~}{m}_2^2=m_{H_2}^2+\lambda ^2s^2+\text{rad. corrs.},\quad \stackrel{~}{m}_3^2=2\lambda s(A_\lambda +\kappa s)+\text{rad. corrs.}$$ (3.2)

The radiative corrections in (3.2) stem from the contributions to $V^{(1)}$ and $V^{(2)}$ quadratic in $h_i$. In the large $\mathrm{tan}\beta$ regime (which saturates the upper bound on the lightest Higgs in the MSSM), one is left with only one non-singlet light Higgs $h_1$ and (3.1) simplifies to

$$V_{eff}(h_1)=\stackrel{~}{m}_1^2h_1^2+\stackrel{~}{\lambda }h_1^4$$ (3.3)

with

$$\stackrel{~}{\lambda }=\frac{g_1^2+g_2^2}{8}+\frac{3h_t^4}{16\pi ^2}\left(\frac{1}{2}\stackrel{~}{X}_t+t\right)+3\left(\frac{h_t^2}{16\pi ^2}\right)^2\left(32\pi \alpha _s-\frac{3}{2}h_t^2\right)t^2.$$ (3.4)

(Note that in the large $\mathrm{tan}\beta$ regime $\stackrel{~}{A}_t=A_t$ and no dependence on the (M+1)SSM coupling $\lambda$ is left in $\stackrel{~}{\lambda }$.) Now, we can trade the variable $h_1$ for a variable $h_1^{\prime }$ in terms of which the kinetic term is properly normalized, so that we have

$$M_Z^2=\frac{g_1^2+g_2^2}{2}h_1^{\prime \,2}.$$ (3.5)

From eq. (2.13) one finds

$$h_1^2\simeq h_1^{\prime \,2}\left(1-\frac{3h_t^2}{16\pi ^2}t\right).$$ (3.6)

In terms of $h_1^{\prime }$ the effective potential reads

$$V_{eff}(h_1^{\prime })=\stackrel{~}{m}_1^{\prime \,2}h_1^{\prime \,2}+\stackrel{~}{\lambda }^{\prime }h_1^{\prime \,4}$$ (3.7)

with

$$\stackrel{~}{m}_1^{\prime \,2}=\stackrel{~}{m}_1^2\left(1-\frac{3h_t^2}{16\pi ^2}t\right),\qquad \stackrel{~}{\lambda }^{\prime }=\stackrel{~}{\lambda }\left(1-\frac{3h_t^2}{16\pi ^2}t\right)^2.$$ (3.8)

Second, recall that $h_t$ in the one loop contribution to eq. (3.1) is given by the Yukawa coupling at the scale $Q$. Hence, we can replace $h_t$ ($\equiv h_t(Q)$) in $\stackrel{~}{\lambda }^{\prime }$ by $h_t(m_t)$ using eq. (2.14), which allows one to relate it directly to the running top quark mass. Eq. (2.15) now reads $m_t(m_t)=h_t(m_t)h_1^{\prime }$.
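Eq. (2.16) is trivial to invert to leading order; as a small illustration (the value of $\alpha _s$ below is our assumption, not taken from the paper):

```python
import math

# Pole <-> running top-mass conversion, Eq. (2.16):
# m_t^pole = m_t(m_t) * (1 + 4 alpha_s / (3 pi)), inverted to leading order.
ALPHA_S = 0.119                      # alpha_s(M_Z), a typical late-1990s value

def running_from_pole(mt_pole: float, alpha_s: float = ALPHA_S) -> float:
    return mt_pole / (1.0 + 4.0 * alpha_s / (3.0 * math.pi))

for mt_pole in (168.6, 173.8, 179.0):    # 173.8 +/- 5.2 GeV from the text
    print(mt_pole, "->", round(running_from_pole(mt_pole), 1), "GeV")
# 173.8 GeV pole mass corresponds to a running mass of about 165 GeV.
```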
From (3.7), one obtains the mass $m_h$ of the lightest non-singlet Higgs in the case where the singlet decouples (and in the large $\mathrm{tan}\beta$ regime),

$$m_h^2=\frac{1}{2}\frac{d^2V_{eff}}{dh_1^{\prime \,2}}|_{min}=4\stackrel{~}{\lambda }^{\prime }h_1^{\prime \,2}|_{min}.$$ (3.9)

This is just the correct running Higgs mass, but it does not include the pole mass corrections, which involve no large logarithms and which we will neglect throughout this paper. Using (3.5) and expanding $\stackrel{~}{\lambda }^{\prime }$ to the appropriate powers of $t$, the expression for $m_h^2$ becomes<sup>3</sup><sup>3</sup>3In eq. (3.10) and below in eq. (3.11) we omit the argument of $h_t$ wherever its choice corresponds to a higher order effect.

$$m_h^2=M_Z^2\left(1-\frac{3h_t^2}{8\pi ^2}t\right)+\frac{3h_t^2(m_t)}{4\pi ^2}m_t^2(m_t)\left(\frac{1}{2}\stackrel{~}{X}_t+t+\frac{1}{16\pi ^2}\left(\frac{3}{2}h_t^2-32\pi \alpha _s\right)(\stackrel{~}{X}_t+t)t\right)$$ (3.10)

which agrees with the MSSM result in . (Note, however, that the coefficient of the term $\sim \stackrel{~}{X}_tt$ on the right hand side of (3.10) is not necessarily correct, since we would obtain terms of the same order if we took into account simple logarithms in the two loop correction $V^{(2)}$ to the potential.)

The same procedure can be applied for general values of $\mathrm{tan}\beta$. Then one has to consider the 2x2 mass matrix $\frac{1}{2}(\partial _{h_i}\partial _{h_j}V_{eff})$, $i,j=1,2$, where the $h_i$ are properly normalized. Its smallest eigenvalue gives the following upper bound on the mass $m_1$ of the lightest Higgs boson for arbitrary mixings among the 3 states $(h_1,h_2,s)$ (which can be saturated if the lightest Higgs boson has a vanishing singlet component):

$$m_1^2\le M_Z^2\left(\mathrm{cos}^22\beta +\frac{2\lambda ^2}{g_1^2+g_2^2}\mathrm{sin}^22\beta \right)\left(1-\frac{3h_t^2}{8\pi ^2}t\right)+\frac{3h_t^2(m_t)}{4\pi ^2}m_t^2(m_t)\mathrm{sin}^2\beta \left(\frac{1}{2}\stackrel{~}{X}_t+t+\frac{1}{16\pi ^2}\left(\frac{3}{2}h_t^2-32\pi \alpha _s\right)(\stackrel{~}{X}_t+t)t\right).$$ (3.11)

The only difference between the MSSM bound and (3.11) is the 'tree level' term $\sim \lambda ^2\mathrm{sin}^22\beta$. This term is important for moderate values of $\mathrm{tan}\beta$. Hence, the maximum of the lightest Higgs mass in the (M+1)SSM is not obtained for large $\mathrm{tan}\beta$ as in the MSSM, but rather for moderate $\mathrm{tan}\beta$ (as confirmed by our numerical analysis, cf. section 5). On the other hand, the radiative corrections are identical in the (M+1)SSM and in the MSSM. In particular, the linear dependence on $\stackrel{~}{X}_t$ is the same in both models. Hence, from eq. (2.11), the upper bound on $m_1^2$ is maximized for $\stackrel{~}{X}_t=6$ (corresponding to $\stackrel{~}{A}_t=\sqrt{6}M_{susy}$, the 'maximal mixing' case), and minimized for $\stackrel{~}{X}_t=0$ (corresponding to $\stackrel{~}{A}_t=0$, the 'no mixing' case). A rough numerical evaluation of the bound (3.11) is sketched below.
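All inputs in the following sketch are our own assumptions, spelled out in the comments; it merely plugs numbers into (3.11). Note that with $\lambda$ held fixed at .7 the tree level term keeps pushing the bound up toward small $\mathrm{tan}\beta$; the maximum at $\mathrm{tan}\beta \simeq 2.7$ quoted in section 5 only emerges once the $\mathrm{tan}\beta$ dependence of $\lambda _{max}$ from the Landau-singularity constraint is folded in.

```python
import numpy as np

# Numerical sketch of the bound (3.11). Assumed inputs: v = 246 GeV,
# M_Z = 91.19 GeV, alpha_s = 0.119, running m_t = 165 GeV, M_susy = 1 TeV,
# lambda = 0.7 held fixed, and maximal mixing X_t = 6.
MZ, v, alpha_s = 91.19, 246.0, 0.119

def m1_bound(tanb, lam=0.7, mt=165.0, Msusy=1000.0, Xt=6.0):
    beta = np.arctan(tanb)
    sb = np.sin(beta)
    ht = np.sqrt(2.0) * mt / (v * sb)        # from m_t = h_t * v*sin(beta)/sqrt(2)
    t = np.log((Msusy**2 + mt**2) / mt**2)   # Eq. (2.10)
    tree = MZ**2 * np.cos(2*beta)**2 + 0.5 * lam**2 * v**2 * np.sin(2*beta)**2
    oneloop = 3 * ht**2 / (8 * np.pi**2) * t
    rad = (3 * ht**2 * mt**2 / (4 * np.pi**2)) * sb**2 * (
        0.5 * Xt + t
        + (1.5 * ht**2 - 32 * np.pi * alpha_s) / (16 * np.pi**2) * (Xt + t) * t)
    return np.sqrt(tree * (1 - oneloop) + rad)

for tb in (1.5, 2.0, 2.7, 5.0, 10.0, 30.0):
    print(tb, round(m1_bound(tb), 1))
# Near tan(beta) = 2.7 this crude evaluation gives ~132-133 GeV, close to the
# 133.5 GeV quoted in section 5; at large tan(beta) the lambda^2 term dies off.
```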
## 4 Parametrization of the (M+1)SSM

Eq. (3.11) gives an upper bound on the lightest Higgs mass $m_1$ regardless of its coupling to the gauge bosons. In the extreme case of a pure singlet lightest Higgs, the next-to-lightest Higgs is non-singlet and the upper bound (3.11) actually applies to $m_2$. On the other hand, it can occur that the lightest Higgs is weakly coupled to the gauge bosons (without being a pure singlet) and $m_2$ is above the limit (3.11). This case requires a numerical analysis, which will be performed in the next section. First, let us present our methods of scanning the parameter space of the (M+1)SSM. Not counting the known gauge couplings, the parameters of the model are

$$\lambda ,\kappa ,h_t,A_\lambda ,A_\kappa ,A_t,m_{H_1}^2,m_{H_2}^2,m_S^2,m_Q^2,m_T^2$$ (4.1)

where $h_t$ is eventually fixed by the top mass, and an overall scale of the dimensionful parameters by the $Z$ mass. Now, let us see how to handle this high dimensional parameter space in two different scenarios.

### 4.1 Constrained (M+1)SSM

In the C(M+1)SSM the soft terms are assumed to be universal at the GUT scale, the physical minimum of the effective potential has to be the global minimum, and present experimental constraints on the sparticle and Higgs masses are applied. The free parameters can be chosen as the GUT scale dimensionless parameters

$$\lambda _0,\kappa _0,h_{t0},\frac{A_0}{M_{1/2}},\frac{m_0^2}{M_{1/2}^2}$$ (4.2)

where $A_0$, $M_{1/2}$ and $m_0^2$ are the universal trilinear coupling, gaugino mass and scalar mass, respectively. In order to scan the 5-dimensional parameter space of the C(M+1)SSM, we proceed as in refs. : First, we scan over the GUT scale parameters (4.2) and integrate numerically the renormalization group equations down to the susy scale in each case. Then, we minimize the complete two loop effective potential in order to obtain the Higgs vevs $h_1,h_2,s$. In principle, we could have followed the same procedure as in section 3 to obtain the dominant two loop corrections, i.e. replacing $h_1$ by $h_1^{\prime }$ and $h_t$ by $h_t(m_t)$. However, in order to obtain numerically correct results also in the regime $M_{susy}\lesssim 1$ TeV, we did not expand $V^{(1)}$ in powers of $h_i/M_{susy}$, i.e. we used the full expression (2.3) for $V^{(1)}$. Then it becomes inconvenient to perform the field redefinition (3.6), which is implicitly non-linear due to the $h_1$ dependence of $t$ via $m_t$. Therefore we proceed differently: For a given set of low energy parameters, which are implicitly obtained at the scale $Q\sim M_{susy}$, we minimize directly

$$V_{eff}=V^{(0)}+V^{(1)}+V^{(2)}$$ (4.3)

with $V^{(0)}$ as in (2.2), $V^{(1)}$ as in (2.3) and $V^{(2)}$ as in (2.12). Points in the parameter space leading to deeper unphysical minima of the effective potential with $h_i=0$ or $s=0$ are removed. The overall scale is then fixed by relating the vevs $h_i$ to the physical $Z$ mass through

$$M_Z^2=\frac{1}{2}(g_1^2+g_2^2)(Z_{H_1}^2h_1^2+h_2^2)$$ (4.4)

with $Z_{H_1}$ as in eq. (2.13). Next, we discard all points in the parameter space where the top quark mass (including the corrections (2.14) to $h_t$) does not correspond to the measured $m_t^{pole}=173.8\pm 5.2$ GeV. We also require sfermions with masses $m_{\stackrel{~}{f}}>M_Z/2$ and gluinos with masses $m_{\stackrel{~}{g}}\gtrsim 200$ GeV. Finally, the correct 3x3 Higgs mass matrix is related to the matrix of second derivatives of the Higgs potential at the minimum after dividing $\frac{1}{2}\partial _{h_1}^2V_{eff}$ by $Z_{H_1}$, and $\frac{1}{2}\partial _{h_1}\partial _{h_2}V_{eff}$ and $\frac{1}{2}\partial _{h_1}\partial _sV_{eff}$ by $Z_{H_1}^{1/2}$. For each point in the parameter space, we then obtain the two loop Higgs boson masses and couplings to gauge bosons.
Then, we apply the present constraints from the negative Higgs searches at LEP (cf. section 5 for details). The results in section 5 are based on scans over $10^6$ points in the parameter space. The essential effect of all constraints within the C(M+1)SSM is to further reduce the allowed range for the Yukawa coupling $\lambda$ to $\lambda \lesssim .3$.

### 4.2 General (M+1)SSM

In the general (M+1)SSM, we only assume that we are in a local minimum of the effective potential (4.3) and that the running Yukawa couplings $\lambda ,\kappa ,h_t$ are free of Landau singularities below the GUT scale. In order to scan the high dimensional parameter space (4.1) of the general (M+1)SSM we proceed as follows: First, we use the three minimization equations of the full effective potential (4.3) with respect to $h_1$, $h_2$ and $s$ in order to eliminate the parameters $m_{H_1}^2$, $m_{H_2}^2$ and $m_S^2$ in favour of the three Higgs vevs. Using the relation (4.4), we replace $h_1,h_2$ by $\mathrm{tan}\beta$ and $M_Z$. Finally, eqs. (2.14), (2.15) and (2.16) allow us to express $h_t$ in terms of $m_t^{pole}$ and the other parameters. We are then left with six 'tree level' parameters $\lambda$, $\kappa$, $A_\lambda$, $A_\kappa$, $s$, $\mathrm{tan}\beta$, and three parameters appearing only through the radiative corrections, which we choose as $\stackrel{~}{A}_t$, $M_{susy}$ and $\delta$, as defined in eqs. (2.6) and (2.8).

Requiring that the Yukawa couplings are free of Landau singularities below the GUT scale and using the renormalization group equations of the (M+1)SSM , one obtains upper limits on $\lambda ,\kappa ,h_t$ at the susy scale. The latter turns into a lower bound on $\mathrm{tan}\beta$ depending mainly on $m_t^{pole}$ and $M_{susy}$. As expected from eq. (3.11), we observe that upper limits on the Higgs masses are obtained when $\lambda$ is maximal. From the renormalization group equations, one finds that the upper limit on $\lambda$ increases with decreasing $\kappa$; thus we choose $\kappa \simeq 0$ and $\lambda =\lambda _{max}\simeq .7$ (which still depends on $h_t$, i.e. on $\mathrm{tan}\beta$). As already mentioned, one can see from eq. (3.11) that the lightest Higgs mass is maximized for moderate values of $\mathrm{tan}\beta$. Hence, except in fig. 4 where $\mathrm{tan}\beta$ varies, we fix $\mathrm{tan}\beta =2.7$ which, as we shall see, maximizes the Higgs masses for $m_t^{pole}=173.8$ GeV. Unless stated otherwise, the upper limits on the Higgs masses presented in the next section are given in the maximal mixing scenario ($\stackrel{~}{A}_t=\sqrt{6}M_{susy}$). We have also found that the Higgs masses are maximized for small values of $\delta$, and we fixed $\delta =0$ (thus $m_Q=m_T=M_{susy}$). In order to obtain the results presented in the next section, we have used numerical routines to maximize the Higgs masses with respect to the remaining three parameters $A_\lambda ,A_\kappa ,s$.

## 5 Reduced couplings versus mass bounds

Let us start with the mass $m_1$ of the lightest Higgs scalar, independently of its coupling to the gauge bosons. The upper limit on $m_1$ in the general (M+1)SSM is plotted in fig. 1 (straight line) as a function of $M_{susy}$ (for $m_t^{pole}=173.8$ GeV). This limit is well above the one of the MSSM because of the additional tree level contribution to $m_1^2$ proportional to $\lambda ^2M_Z^2$ (cf. eq. (1.3)).
At $M_{susy}=1$ TeV we have $m_1\le 133.5$ GeV (in agreement with the analytic approximation (3.11)); at $M_{susy}=3$ TeV this upper limit increases only by $\sim 3$ GeV. This weak dependence on $M_{susy}$ is due to the negative two loop contributions to $m_1$. Within the C(M+1)SSM, the combined constraints on the parameter space require $\lambda$ to be small, $\lambda \lesssim .3$ . Accordingly, the upper limit on $m_1$ is very close to the one of the MSSM. It is shown as crosses in fig. 1, and reaches 120 GeV at $M_{susy}=1$ TeV. In the following, we shall assume $M_{susy}=1$ TeV.

A side remark on the behavior for small $M_{susy}$ is in order: From eq. (2.7), it is obvious that, in the assumed limit $\delta \to 0$, the assumption of maximal stop mixing ($\stackrel{~}{A}_t=\sqrt{6}M_{susy}$) cannot be maintained for

$$\frac{\sqrt{6}-\sqrt{2}}{2}m_t<M_{susy}<\frac{\sqrt{6}+\sqrt{2}}{2}m_t,$$ (5.1)

because it would imply a negative stop mass squared. Therefore, in the general (M+1)SSM, we choose $\stackrel{~}{A}_t$ in this regime such that the lightest stop mass squared remains positive. On the other hand, within the C(M+1)SSM, where the soft susy breaking terms are related, the limit of small $M_{susy}$ is not feasible, since it would contradict the negative results of the sparticle searches.

As discussed in the introduction, the upper limit on $m_1$ is not necessarily physically relevant, since the coupling of the lightest Higgs to the $Z$ boson can be very small. Actually, this phenomenon can also appear in the MSSM, if $\mathrm{sin}^2(\beta -\alpha )$ is small. However, the CP odd Higgs boson $A$ is then necessarily light ($m_A\simeq m_h<M_Z$ at tree level), and the process $Z\to hA$ can be used to cover this region of the parameter space in the MSSM. In the (M+1)SSM, a small gauge boson coupling of the lightest Higgs $S_1$ is usually related to a large gauge singlet component, in which case no (strongly coupled) light CP odd Higgs boson is available. Hence, Higgs searches in the (M+1)SSM possibly have to rely on the search for the second lightest Higgs scalar $S_2$.

Let us now define $R_i$ as the square of the coupling $ZZS_i$ divided by the corresponding standard model Higgs coupling:

$$R_i=(S_{i1}\mathrm{sin}\beta +S_{i2}\mathrm{cos}\beta )^2$$ (5.2)

where $S_{i1},S_{i2}$ are the $H_1,H_2$ components of the CP even Higgs boson $S_i$, respectively. Evidently, we have $0\le R_i\le 1$, and unitarity implies

$$\sum _{i=1}^3R_i=1.$$ (5.3)

Fortunately, as was already mentioned, in the extreme case $R_1\to 0$ the upper limit on $m_2$ is the same as the above upper limit on $m_1$. On the other hand, scenarios with, e.g., $R_1\simeq R_2\simeq 1/2$ are possible. In the following we will discuss these situations in detail. We are interested in upper limits on the two lightest CP even Higgs bosons $S_{1,2}$. These are obtained in the limit where the third Higgs, $S_3$, is heavy and decouples, i.e. $R_3\simeq 0$. (This is the equivalent of the so-called decoupling limit in the MSSM: the upper bound on the lightest Higgs $h$ is saturated when the second Higgs $H$ is heavy and decouples.) Hence, we have $R_1+R_2\simeq 1$. In the regime $R_1\ge 1/2$ experiments will evidently first discover the lightest Higgs (with $m_1\le 133.5$ GeV for $M_{susy}=1$ TeV).
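Eqs. (5.2) and (5.3) are easy to illustrate; in the sketch below the $3\times 3$ CP even mixing matrix is entirely hypothetical (rows are mass eigenstates $S_i$, columns the $H_1$, $H_2$ and singlet components), and the sum rule holds for any orthogonal choice.

```python
import numpy as np

# Reduced couplings R_i of Eq. (5.2) from a CP-even mixing matrix S.
def reduced_couplings(S, tanb):
    beta = np.arctan(tanb)
    return (S[:, 0] * np.sin(beta) + S[:, 1] * np.cos(beta)) ** 2

# Invented example: S_1 with a large singlet admixture, S_3 a pure H_2 state.
theta = 0.9
S = np.array([[ np.cos(theta), 0.0, np.sin(theta)],   # S_1
              [-np.sin(theta), 0.0, np.cos(theta)],   # S_2
              [ 0.0,           1.0, 0.0          ]])  # S_3

R = reduced_couplings(S, tanb=2.7)
print(R, R.sum())   # R_1 is suppressed by the singlet admixture; sum = 1 (5.3)
```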
The 'worst case scenario' in this regime corresponds to $m_1\simeq 133.5$ GeV, $R_1\simeq 1/2$; the presence of a Higgs boson with these properties has to be excluded in order to test this part of the parameter space of the general (M+1)SSM. The regime where $R_1<1/2$ (and hence $1/2<R_2\le 1$) is more delicate: here the lightest Higgs may escape detection because of its small coupling, and it may be easier to detect the second lightest Higgs. In fig. 2 we show the upper limit on $m_2$ as a function of $R_2$ in the general (M+1)SSM as a thin straight line. For $R_2\to 1$ (corresponding to $R_1\to 0$) we obtain the announced result: the upper limit on a Higgs boson with $R\simeq 1$ is always given by the previous upper limit on $m_1$, even if the corresponding Higgs boson is actually the second lightest one. The same applies, of course, to the C(M+1)SSM, where the upper limit on $m_2$ is also indicated as crosses in fig. 2.

In the following we will discuss this 'delicate' regime, $R_1<1/2$ and $1/2<R_2\le 1$, in some detail: Fortunately, one finds that the upper limit on $m_2$ is saturated only when the mass $m_1$ of the lightest Higgs boson tends to 0. Clearly, one has to take into account the constraints from Higgs boson searches which apply to reduced couplings $R<1/2$ – i.e. lower limits on $m_1$ as a function of $R_1=1-R_2$ – in order to obtain realistic upper limits on $m_2$ vs $R_2$. Lower limits on $m_1$ as a function of $R_1$ (in the regime $R_1<1/2$) have been obtained at LEP . We use the following analytic approximation for the constraints on $R_1$ vs $m_1$ in this regime:

$$\mathrm{log}_{10}R_1<\frac{m_1}{45\,\text{GeV}}-2$$ (5.4)

The resulting upper limit on $m_2$ is shown in fig. 2 as a thick straight line. This constraint is automatically included in the C(M+1)SSM results (crosses). Present and future Higgs searches at LEP will lead to more stringent constraints in the regime $.1<R_1<1/2$ . We approximate the possible constraints from a run at 198 GeV c.m. energy and 200 pb<sup>-1</sup> by

$$\mathrm{ln}R_1<2\left(\frac{m_1}{98\,\text{GeV}}\right)^4-3$$ (5.5)

The resulting upper limit on $m_2$ is shown in fig. 2 as a thick dashed line.

It would be desirable to have the upper limit on $m_2$ in the general (M+1)SSM for arbitrary lower limits on $m_1$ as a function of $R_1$. To this end we have produced fig. 3. The different dotted curves show the upper limit on $m_2$ as a function of $R_2$ for different lower limits on $m_1$ (as indicated on each curve) as a function of $R_1$ (as indicated at the top of fig. 3). In practice, fig. 3 can be used to obtain upper limits on the mass $m_2$, in the regime $R_1<1/2$, for arbitrary experimental lower limits on the mass $m_1$: For each value of the coupling $R_1$, which corresponds to a vertical line in fig. 3, one has to find the point where this vertical line crosses the dotted curve associated with the corresponding experimental lower limit on $m_1$. Joining these points by a curve leads to the upper limit on $m_2$ as a function of $R_2$. We have indicated again the present LEP limit (5.4), already shown in fig. 2, which excludes the shaded region ($m_2>172.5$ GeV for $R_2=.5$, $m_2>150$ GeV for $R_2=.75$, etc.). We have also shown again the possible LEP2 constraints on $m_2$ arising from (5.5) as a thick dashed line. Lower experimental limits on a Higgs boson with $R>1/2$ restrict the allowed regime for $m_2$ (for $R_2>1/2$) in fig. 3 from below.
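The exclusion curves (5.4) and (5.5) can be coded in a few lines; the function names below are ours, and the curves are the analytic approximations quoted above (valid only for $R_1<1/2$, hence the cap at 0.5).

```python
import numpy as np

# Maximal reduced coupling R_1 still allowed for a Higgs of mass m1 (GeV).
def r1_max_lep(m1):
    """Present LEP limit, Eq. (5.4): log10 R_1 < m1/45 - 2."""
    return np.minimum(0.5, 10.0 ** (m1 / 45.0 - 2.0))

def r1_max_lep2(m1):
    """Projected LEP2 limit (198 GeV, 200/pb), Eq. (5.5): ln R_1 < 2(m1/98)^4 - 3."""
    return np.minimum(0.5, np.exp(2.0 * (m1 / 98.0) ** 4 - 3.0))

for m1 in (0.0, 45.0, 80.0, 98.0):
    print(f"m1 = {m1:5.1f} GeV: R1 < {r1_max_lep(m1):.3f} (LEP), "
          f"{r1_max_lep2(m1):.3f} (LEP2 projection)")
# A massless Higgs must have R_1 < 0.01 (LEP) resp. < 0.05 (LEP2 estimate).
```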
The present lower limits on $`m_2`$ from LEP are not visible in fig. 3, since we have only shown the range $`m_2>130`$ GeV. Higgs searches at Run II of the Tevatron may push the lower limits on $`m_2`$ upwards into this range. This would be necessary if one aims at an exclusion of the ’delicate’ regime of the (M+1)SSM: then, lower limits on the mass $`m_2`$ – for any value of $`R_2`$ between $`1/2`$ and 1 – of at least 133.5 GeV are required; the precise experimental lower limits on $`m_2`$ as a function of $`R_2`$, which would be needed to this end, will depend on the achieved lower limits on $`m_1`$ as a function of $`R_1`$ in the regime $`R_1<1/2`$. In principle, from eq. (5.3), one could have $`R_2>R_1`$ with $`R_2`$ as small as $`1/3`$. However, in the regime $`1/3<R_2<1/2`$, the upper bound on $`m_2`$ as a function of $`R_2`$ for different fixed values of $`m_1`$ can only be saturated if $`R_1=R_2`$. Then it is sufficient to look for a Higgs boson with a coupling $`1/3<R<1/2`$ and a mass $`m<133.5`$ GeV to cover this region of the parameter space of the (M+1)SSM. Finally, we consider the dependence of the upper bounds on $`m_{1,2}`$ on $`\mathrm{tan}\beta `$ and the top quark pole mass. In fig. 4 we plot the upper limit on $`m_{1,2}`$ (for $`R_{1,2}=1`$) against $`\mathrm{tan}\beta `$ for $`m_t^{pole}=173.8`$ GeV as a thick straight line. Remarkably, as announced before, this $`\mathrm{tan}\beta `$ dependence is very different from the MSSM: the maximum is assumed for $`\mathrm{tan}\beta \approx 2.7`$ (with $`m_{1,2}\approx 133.5`$ GeV, in agreement with figs. 2, 3). The origin of this $`\mathrm{tan}\beta `$ dependence is the tree level contribution $`\lambda ^2\mathrm{sin}^22\beta `$ to (3.11). The height and the location of the maximum vary somewhat with $`m_t^{pole}`$; the thick dashed and dotted curves correspond to $`m_t^{pole}=173.8\pm 5.2`$ GeV, respectively. The absolute maximum is at $`\mathrm{tan}\beta \approx 3`$ with $`m_{1,2}\approx 135`$ GeV. In the ’delicate’ regime, where one has to search for the second lightest Higgs with $`R_2`$ between $`1/2`$ and $`1`$, one could worry whether the $`\mathrm{tan}\beta `$ dependence of the upper limit on $`m_2`$ is different. This is not the case: as a thin straight line we show the upper limit on $`m_2`$ in the extreme case $`R_2=1/2`$ and $`m_t^{pole}=173.8`$ GeV (where the LEP constraint (5.4) is taken into account), which again assumes its maximum for $`\mathrm{tan}\beta \approx 2.7`$ (now with $`m_2\approx 172.5`$ GeV, in agreement with figs. 2, 3). As above, the thin dashed and dotted curves correspond to $`m_t^{pole}=173.8\pm 5.2`$ GeV, respectively, and the absolute maximum is at $`\mathrm{tan}\beta \approx 3`$ with $`m_2\approx 175.5`$ GeV. Within the C(M+1)SSM, where $`\lambda `$ is small, the dependence of the upper limit on $`m_2`$ on $`\mathrm{tan}\beta `$ resembles more closely that of the MSSM, as shown as crosses in fig. 4. To conclude, we have studied the CP even Higgs sector of the general (M+1)SSM and the C(M+1)SSM including the dominant two loop corrections to the effective potential. We have emphasized the need to search for Higgs bosons with reduced couplings, which are possible within this model. Our main results are presented in fig. 3, which allows one to obtain the constraints on the Higgs sector of the model both from searches for Higgs bosons with weak coupling ($`R<1/2`$), and strong coupling ($`R>1/2`$).
The necessary (but not sufficient) condition for testing the complete parameter space of the (M+1)SSM is to rule out a CP even Higgs boson with a coupling $`1/3<R<1`$ and a mass below 135 GeV. The sufficient condition (i.e. the precise upper bound on $`m_2`$ vs $`R_2`$) depends on the achieved lower bound on the mass of a ’weakly’ coupled Higgs (with $`0<R<1/2`$) and can be obtained from fig. 3. At the Tevatron this would probably require an integrated luminosity of up to 30 fb<sup>-1</sup> . If this cannot be achieved, and no Higgs is discovered, we will have to wait for the results of the LHC in order to see whether supersymmetry beyond the MSSM is realized in nature. ## Figure Captions * Upper limits on the mass $`m_1`$ of the lightest CP even Higgs boson versus $`M_{susy}`$ in the general (M+1)SSM (straight line) and the C(M+1)SSM (crosses). * Upper limits on the mass $`m_2`$ of the second lightest CP even Higgs (in the regime $`R_2>1/2`$) against $`R_2`$ in the general (M+1)SSM (thin straight line); the general (M+1)SSM with LEP constraints (5.4) (thick straight line); the general (M+1)SSM with expected LEP2 constraints (5.5) (thick dashed line); the C(M+1)SSM with LEP constraints (5.4) (crosses). * Upper limits on the mass $`m_2`$ against $`R_2`$, for different lower limits on the mass $`m_1`$ (as indicated on each line in GeV) of the lightest Higgs boson, for $`1/2<R_2<1`$. $`R_1=1-R_2`$ is shown on the top axis. The boundary of the shaded area corresponds to the thick line in fig. 2, and the dashed line is the same as in fig. 2. * Upper limits on $`m_{1,2}`$ with $`R_{1,2}=1`$ (thick lines), and upper limits on $`m_2`$ with $`R_2=1/2`$ (thin lines) versus $`\mathrm{tan}\beta `$ in the general (M+1)SSM for $`m_t^{pole}=173.8`$ GeV (straight), $`179`$ GeV (dashed) and $`168.6`$ GeV (dotted); upper limit on $`m_2`$ in the C(M+1)SSM (crosses) for $`m_t^{pole}=173.8\pm 5.2`$ GeV. The LEP constraints (5.4) are taken into account in each case.
# Exotic Quarkonia from Anisotropic Lattices ## 1 INTRODUCTION Heavy quarkonia are an interesting testing ground for our theoretical understanding of QCD. Many models have been developed to describe the vast amount of experimental data, but more accurate predictions from first principles will be needed for future experiments with heavy quark systems. However, the conventional lattice approach becomes overly expensive if one tries to accommodate all the non-relativistic energy scales on a single grid: $`Mv^2\ll Mv\ll M`$. Here $`v`$ is the small velocity of the heavy quark with mass $`M`$. The non-relativistic nature of such problems has frequently been employed to formulate effective theories in which spatial and temporal directions are treated differently at the level of the quark action . In particular the NRQCD approach has led to very precise calculations of the low lying spectrum in heavy quarkonia . Additional problems arise if highly energetic excitations are to be resolved on isotropic lattices. Very often the temporal discretisation is too coarse and the correlator of such heavy states cannot be measured accurately for long times. Glueballs and hybrid states are among the prominent examples of such high-lying excitations, and they receive much theoretical and experimental attention as they are non-perturbative revelations of the gluon degrees of freedom in QCD. Early lattice studies have predicted such states starting at around 1.4 GeV, but those results were spoilt by large statistical uncertainties . To resolve such heavy states before their correlators decay into noise, the inverse temporal lattice spacing should be at least 3 GeV or higher. More recently anisotropic lattices have been used to circumvent this problem in glueball calculations by giving the lattice a fine temporal resolution whilst maintaining a coarse discretisation in the spatial direction . The success of this approach has triggered new efforts to measure hybrid potentials in an anisotropic gluon background , and a comparative NRQCD analysis has demonstrated the validity of this adiabatic approximation for $`b\overline{b}g`$ hybrids. These attempts were reviewed in . In a previous study we reported on first quantitative results for charmonium and bottomonium hybrid states from anisotropic lattices . Here we extend those methods to study also other excitations and the spin structure in heavy quarkonia more carefully. In Section 2 we present the details of our calculation and results for the spin-averaged spectrum. In Section 3 we investigate the spin structure in heavy quarkonia and report on novel results for the splittings in heavy hybrids and D-states. ## 2 SPIN-AVERAGED SPECTRUM In order to study excited states with small statistical errors it is mandatory to have a fine resolution in the temporal lattice direction, along which we measure the multi-exponential decay of meson correlators. To this end we employ an anisotropic and spatially coarse gluon action: $`S=\beta {\displaystyle \underset{x,\mathrm{i}>\mathrm{j}}{\sum }}\xi ^{-1}\left\{{\displaystyle \frac{5}{3}}P_{\mathrm{ij}}-{\displaystyle \frac{1}{12}}\left(R_{\mathrm{ij}}+R_{\mathrm{ji}}\right)\right\}`$ $`+\beta {\displaystyle \underset{x,\mathrm{i}}{\sum }}\xi \left\{{\displaystyle \frac{4}{3}}P_{\mathrm{it}}-{\displaystyle \frac{1}{12}}R_{\mathrm{it}}\right\}.`$ (1) Here $`(\beta ,\xi )`$ are two parameters, which determine the gauge coupling and the anisotropy of the lattice. Action (1) is Symanzik-improved and involves plaquette terms, $`P_{\mu \nu }`$, as well as rectangles, $`R_{\mu \nu }`$.
It is designed to be accurate up to $`𝒪(a_s^4,a_t^2)`$, classically. To reduce the radiative corrections we invoked mean-field improvement and divided all spatial and temporal links by a ’tadpole’ coefficient $`u_{0s}`$ and $`u_{0t}`$, respectively. We determined those coefficients self-consistently by measuring spatial and temporal plaquettes: $`u_{0s}=\langle \mathrm{Tr}P_{ij}\rangle ^{1/4}`$ and $`u_{0t}=\langle \mathrm{Tr}P_{it}\rangle ^{1/4}`$. With this prescription we expect only small deviations of $`\xi `$ from its tree-level value $`a_s/a_t`$. To describe the forward propagation of heavy quarks in the gluon background ($`A_\mu `$) we used the NRQCD approach introduced in : $$G_{t+a_t}=\mathrm{exp}\left[-a_t\left(H(g,M)+igA_t\right)\right]G_t.$$ (2) Here the NRQCD Hamiltonian, $`H`$, is designed to account for relativistic corrections and includes spin-dependent operators up to $`𝒪(mv^6)`$. The implementation details of Equation (2) on anisotropic lattices are given in . Since we are working with spatially coarse lattices it is crucial to improve all lattice derivatives and colour-electromagnetic fields in Equation (2). Following the prescription of , we also achieved an accuracy of $`𝒪(a_s^4,a_t^2)`$ in the quark sector. At each value of the coupling, $`\beta `$, we carefully tuned the heavy quark mass, $`M`$, so as to reproduce the experimental ratios $`M_{kin}/(1P-1S)`$ very accurately. Fortunately, the spin-independent quantities, such as $`1P-1S`$, are not very sensitive to the actual value of $`M`$, but the spin structure will show a strong dependence. For our calculation we chose several different values of the coupling, $`\beta `$, which correspond to spatial lattice spacings between 0.15 fm and 0.47 fm. On even coarser lattices we cannot expect to control discretisation errors with our simple-minded approach, while much finer lattices will violate the validity of NRQCD, which requires $`a_sM>1`$. From the quark propagator, $`G_t`$, we construct meson correlators for bound states with spin $`S=(0,1)`$ and orbital angular momentum $`L=(0,1,2)`$. Magnetic hybrid states are constructed from the colour-magnetic field coupled to a $`Q\overline{Q}`$-pair ($`B_i=[\mathrm{\Delta }_j,\mathrm{\Delta }_k]`$). For example, the spin-singlet operators read $$\overline{Q}Q,\overline{Q}\mathrm{\Delta }_iQ,\overline{Q}\mathrm{\Delta }_j\mathrm{\Delta }_kQ\text{ and }\overline{Q}B_iQ.$$ (3) Those simple operators can be further improved upon in order to optimise their overlap with the states of interest. Here we extract the excitation energies from multi-exponential fits to several different correlators. Within the NRQCD approach one cannot extrapolate to the continuum limit, and it is paramount to establish a scaling region for physical quantities already at finite lattice spacing. In a previous study we found such scaling windows for the spin-averaged gluon excitations in both charmonium and bottomonium . These results are particularly encouraging as they are in excellent agreement with calculations on isotropic lattices , but with much smaller errors. In addition we were able to check our predictions against possible systematic errors such as finite volume effects. This is a natural concern since hybrids are expected to be rather large, owing to the flat potentials in which they live. In Figure 1 we show our results from lattices all larger than 1.2 fm in extent, beyond which we could not resolve any volume dependence.
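As an illustration of the fitting step, here is a minimal sketch of a two-exponential correlator fit; the data are synthetic, standing in for the measured correlators, and in practice several smeared operators are fitted simultaneously:

```python
import numpy as np
from scipy.optimize import curve_fit

# Meson correlators along the fine temporal direction decay as
# C(t) = sum_n A_n exp(-E_n t); a truncated two-state fit extracts the
# ground state and the first excitation (energies in temporal lattice units).
def two_exp(t, A0, E0, A1, E1):
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

t = np.arange(1, 40)
rng = np.random.default_rng(0)
data = two_exp(t, 1.0, 0.25, 0.4, 0.60) * (1 + 0.01 * rng.standard_normal(t.size))

popt, pcov = curve_fit(two_exp, t, data, p0=[1.0, 0.2, 0.5, 0.5])
print(f"ground state a_t*E0 = {popt[1]:.3f}, first excitation a_t*E1 = {popt[3]:.3f}")
```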
For the bottomonium hybrid we also have consistent results from two different anisotropies ($`\xi =3,5`$), which confirms our initial assumption of small temporal lattice spacing artefacts. In fact, on our lattices the tadpole coefficient $`u_{0t}`$ deviates from its continuum value by only about 5% or less. It is also interesting to notice the presence of scaling violations, which are clearly visible if the lattices are too coarse for the physical system. We have now extended our analysis to study also higher radial excitations and D-states with $`L=2`$. The spin-independent results are summarised in Figure 2. The possibility to resolve all these excitations reliably should be considered the main success of anisotropic lattices. We are presently performing a finite volume analysis of the higher radial excitations. However, with the newly achieved accuracy we can also study spin-splittings in more detail. ## 3 SPIN STRUCTURE Our inclusion of relativistic corrections to Equation (2) is a significant improvement over previous NRQCD calculations of hybrid states, which were restricted to only leading order in the velocity expansion: $`𝒪(mv^2)`$. At this level there are no spin-dependent operators and we have a strict degeneracy of all singlet and triplet states. Within the NRQCD framework, the spin-dependent operators appear first as higher order corrections to the Hamiltonian: $`𝒪(mv^4)`$. This is in accordance with the experimental observation that spin-splittings in quarkonia are suppressed by $`v^2`$ compared to the spin-independent structure discussed in the previous section. Here we also include spin correction terms up to $`𝒪(mv^6)`$, and study the breaking of the degeneracy. In particular, we could directly observe the exotic hybrid, $`1^{-+}`$, which is the state of greatest phenomenological interest. Our results for fine structure and hyperfine splittings are shown in Figures 3 and 4, respectively. There are several interesting observations one can make. First of all, we find a noticeable reduction of the fine structure in D-states when compared to that of P-states. Similarly, the hyperfine splitting between spin-singlet and spin-triplet states is equally suppressed as the orbital angular momentum is increased. This is in accordance with potential models, which predict a hyperfine splitting only for (L=0)-states since all other wavefunctions vanish at the origin. Our data also indicates that the fine structure in hybrid states is enlarged compared to the splittings in P-states, but we could not yet resolve any splitting between the spin-averaged triplet state ($`{}^{3}\overline{B}=(5\,{}^{3}B_2+3\,{}^{3}B_1+{}^{3}B_0)/9`$) and the singlet $`{}^{1}B_1`$. Finally one should also notice that the scaling behaviour of the spin structure is more involved than that of the spin-independent spectrum. This is not surprising since we have adopted a very simplistic approach to determine all the coefficients in NRQCD with a single prescription (tree-level tadpole improvement). Namely, the hyperfine splitting $`{}^{3}S_1-{}^{1}S_0`$ does not scale on the lattices considered here. It is apparent that one needs a better improvement prescription to account for lattice spacing artefacts in such UV-sensitive quantities. In conclusion, we find that coarse and anisotropic lattices are extremely useful for precision measurements of higher excited states.
This is due to an improved resolution in the temporal direction and the possibility to generate large ensembles of gauge field configurations at small computational cost. It has led to an unprecedented control over statistical and systematic errors in lattice studies of heavy quarkonia. After the inclusion of relativistic corrections we are also sensitive to spin-spin and spin-orbit interactions and could observe a clear hierarchy in the spin structure, depending on the orbital angular momentum. Spin splittings in hybrid states were found to be larger as a result of the gluon angular momentum to which the quark spin can couple. The remaining and dominating systematic error for all our predictions is an uncertainty in the scale as a result of the quenched approximation. This is not yet controlled, and we find a variation of 10-20%, depending on which experimental quantity is used to set the scale. This work is supported by the Research for the Future Programme of the JSPS.
# Multi-Frequency Study of the B3-VLA Sample11footnote 1Table 2 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html II. The Database ## 1 Introduction Homogeneous databases over a wide frequency range for a large sample of radio sources with intermediate or low flux densities are an important ingredient of modern astrophysics. We have therefore embarked on a project to obtain flux densities for the B3-VLA sample (Vigotti et al. 1989) over a frequency range as wide as possible. The aim is to study the spectral properties of a complete sample of radio sources. The B3-VLA sample is composed of sources roughly equally distributed over five flux density intervals, extending down to flux densities about 50 times fainter than those of the 3C survey (Bennett 1962). The sample now contains 1049 radio sources, instead of the 1050 listed in the previous papers dealing with the B3-VLA sample: the source 2302+396, which was already indicated as a possible spurious source close to a grating ring in the B3 catalogue (Ficarra et al. 1985), was deleted from the list since it was recognized as a CLEAN artifact. In fact, it could be found neither in the WENSS nor in the NVSS catalogue. This paper is the second of a series describing the multifrequency properties of the B3-VLA sample. In the first paper we presented the radio continuum data at 10.6 GHz obtained with the Effelsberg radio telescope (Gregorini et al. 1998, hereafter Paper I). We detected 99$`\%`$ of the radio sources, with a typical flux density error of about 1 mJy for the fainter ones. Here we present the spectral database of the whole sample, consisting of flux densities at 151 MHz, 327 MHz, 408 MHz, 1.4 GHz, 4.85 GHz, and 10.6 GHz. Additional observations were performed for 478 sources at 4.85 GHz, which were necessary to complete the information at this frequency and to measure, also at 4.85 GHz, the polarization detected at 10.6 GHz. Sect. 2 describes the observations and data reduction at 4.85 GHz. In Sect. 3 we present the database, with an accurate description of the method used to obtain the flux densities and the errors at each frequency. In Sect. 4 the data table is presented with a discussion of the data quality. ## 2 Observations at 4.85 GHz The observations reported here have been carried out between July 1994 and March 1999. Until August 1995 the old $`\lambda `$6-cm correlation receiver system was employed. This system had two feeds in the secondary focus of the 100-m telescope. The right-hand circular polarization outputs from each feed (obtained after the polarizers in the waveguides) were correlated via a 3-dB hybrid to yield a differential total-power signal of the two feeds. This double-beam technique ensured minimal atmospheric disturbance to the signal. Amplification in the first stage was achieved with cooled FETs. The main horn was connected to an IF-polarimeter to deliver the Stokes U and Q parameters for full linear polarization information. The system operated at a centre frequency of 4.75 GHz, with a bandwidth of 500 MHz. The receiver system temperature was $`\sim `$70 K on the sky (zenith, clear sky). In August 1995 these receivers were replaced by two stable total-power systems, with HEMT amplifiers in the first stage. Here the differential signal is retrieved by subtracting the calibrated signals in the computer. Each of the two total-power receivers is connected to an IF-polarimeter. This system operates at 4.85 GHz; the bandwidth is 500 MHz.
The receiver system temperature was greatly improved, to 30 K on the sky (zenith, clear sky). The half-power beam width was 147″ for the old and 143″ for the new receiver system, and the beam throw was 8.2′ in both cases. The sources were observed by cross-scanning the telescope in right ascension and declination, with a scan length of 15′. The scanning speed was 30′/min., and the total number of scans was adjusted to the expected flux density of each source. Sources with angular extents significantly exceeding the beam size, or exhibiting significant confusion in the cross-scans, were mapped in the double-beam mode and subsequently restored to the equivalent single-beam images using the restoration algorithm of Emerson et al. (1979). The scan separation was 1′, and the map sizes were adjusted so as to account for the source size and the beam separation. The total number of sources mapped this way is 6. Telescope pointing, focussing and polarimeter adjustments were regularly checked by cross-scanning the point sources NGC 7027, 3C 48, 3C 84, 3C 138, 3C 147, 3C 196, 3C 286 and 3C 295. The latter two sources also served as flux density calibrators. ## 3 Database In Tab. 1 we present the information available for the B3–VLA sample. Cols. 1 and 2 list the frequency and the reference to the relevant paper, and Col. 3 gives the percentage of sources for which the data are available. ### 3.1 151 MHz Data These flux densities were obtained by cross-correlating the 6C survey (Hales et al. 1988) with the B3-VLA sample. The search radius used was 100″, which corresponds to a combined 3-$`\sigma `$ error for the fainter sources. We do not expect any chance coincidences, owing to the low source density at 151 MHz (4.1 sources per square degree). The values quoted in Tab. 2 are the peak flux densities for sources with an angular extent $`<`$ 100″ (extents taken from Vigotti et al. 1989), and the integrated ones (listed in the 6C; Hales et al. 1988) for larger sources. The error in the same table is computed as a constant term of 40 mJy, plus a 5% contribution due to the uncertainty of the flux density scale. Since these data are on the flux scale of Roger et al. (1973, RBC), we used the spectral indices reported by these authors to calculate the flux density of their calibrator sources at 178 MHz. In this way we could compare the scale of Roger et al. (1973) with the one of Kellermann et al. (1969, KPW). The ratio between these two flux density scales is KPW/RBC = 0.96. Baars et al. (1977, BGPW) report a ratio of BGPW/KPW = 1.051. Thus, the ratio BGPW/RBC turns out to be 1.008; therefore no correction was applied at 151 MHz. ### 3.2 327 MHz Data For sources with an angular extent $`<`$ 50″ we cross-correlated the B3–VLA positions with the WENSS source list (Rengelink et al. 1997) using a window of 11″ in right ascension and 22″ in declination. For the more extended ones we used a window of 40″ in right ascension and 80″ in declination. The total area searched was 0.03 square degrees. The WENSS source density is about 21.3 per square degree, so that the contamination by chance coincidences is negligible. For the flux density errors we used the formula given by Rengelink et al. (1997), with a noise contribution of 4.5 mJy (which is the average value in the B3–VLA area), plus 4% due to the calibration uncertainty $`\mathrm{\Delta }_{\mathrm{cal}}`$. Sources with a complex structure (as marked in the WENSS catalogue; Rengelink et al.
1997) were inspected directly on the WENSS maps, and their flux densities were computed with the AIPS task TVSTAT. For these sources the errors $`\mathrm{\Delta }`$S were computed as follows: $$\mathrm{\Delta }\mathrm{S}=\sqrt{(\mathrm{\Delta }_{\mathrm{cal}}\mathrm{S})^2+\sigma _\mathrm{l}^2\frac{\mathrm{A}_\mathrm{s}}{\mathrm{A}_\mathrm{b}}}.$$ Here $`\sigma _\mathrm{l}`$ is the local noise in the map, A<sub>s</sub> is the area covered by the radio source, and A<sub>b</sub> is the beam area. The flux densities in the WENSS survey are on the scale of Baars et al. (1977). ### 3.3 408 MHz Data The flux densities were taken from the B3 survey, except for extended sources, for which an integrated flux density was used (Vigotti et al. 1989). For the computation of the errors we used 35 mJy as the constant term and 3$`\%`$ for the term proportional to the source flux density (Ficarra et al. 1985). The flux density scale of these data is based on 3C123, and agrees with the scale of Baars et al. (1977) to within 2$`\%`$. Therefore, no correction was applied. ### 3.4 1.4 GHz Data The flux densities were computed from the maps of the NRAO VLA Sky Survey (NVSS, Condon et al. 1998), centred on the B3-VLA positions, using an automatic two-component Gaussian fit algorithm similar to the AIPS task JMFIT. For the unresolved sources the difference between our flux density and that listed in the NVSS catalogue is negligible ($`<`$ 2%). The errors were calculated with the formula of Condon et al. (1998), where the noise and confusion term is 0.45 mJy/beam and the calibration uncertainty is 3%. For the extended and complex sources the flux densities were computed using the AIPS task TVSTAT. Their errors were computed as above (Sect. 3.2). The flux densities are on the scale of Baars et al. (1977). ### 3.5 4.85 GHz Data All sources not available in the literature (Kulkarni et al. 1990, Gregory et al. 1996) have been observed as described in Sect. 2. The flux densities of Kulkarni et al. (1990), and those observed by us before August 1995, were shifted from 4.75 GHz to 4.85 GHz using the spectral index of the radio source. For the flux densities presented in this paper we adopted 1.0 mJy as the noise contribution, and 2% as the contribution proportional to the flux density. Another 0.45 mJy is added to account for source confusion (Reich 1993). For the data of Kulkarni et al. (1990) the errors are 2 mJy and 2%, respectively. The errors of the flux densities taken from Gregory et al. (1996) are listed in the GB6 catalogue. In two cases, 1412+397 and 2341+396B, the sources could not be separated from a nearby confusing source. We used the flux densities from our measurements (45.8%). In cases where those were not available, the flux densities reported by Kulkarni et al. (1990; 42.4%) or Gregory et al. (1996; 11.8%) were taken. The GB6 maps of the sources with extension larger than 70″ were downloaded using SkyView. In addition, the most extended ones (0136+396, 0157+405A, 0248+467, 0703+426A, 1141+374, and 1309+412A) were mapped in Effelsberg. In all cases the flux densities were determined with the AIPS task TVSTAT and the errors were calculated as described above (Sect. 3.2). The flux densities are on the scale of Baars et al. (1977). ### 3.6 10.6 GHz Data In Tab. 2 we list the integrated flux densities as well as the errors computed using the formula presented in Paper I. Here, the noise term is 0.8 mJy, confusion contributes 0.08 mJy, and the term proportional to the flux is 2%.
The flux densities are on the scale of Baars et al. (1977). ## 4 Discussion Table 2 presents the whole database. Col. 1 lists the B3-VLA name, and Cols. 2 and 3 the radio centroid (equinox J2000.0) from Vigotti et al. (1989; computed as the geometric mean of the source components). The following 12 columns list the flux densities and errors at 151 MHz, 327 MHz, 408 MHz, 1.4 GHz, 4.85 GHz and 10.6 GHz, respectively (all in mJy). The last column contains the sources’ optical identifications, abbreviated as follows: g: radio galaxy identified on the POSS-I, most of which have z $`\lesssim `$ 0.5; G: far radio galaxy with measured redshift (0.5 $`\lesssim `$ z $`\lesssim `$ 3.5); Q: spectroscopically confirmed quasar; b: blue object (i.e. non-confirmed quasar); BL: BL Lac; F: featureless spectrum; a blank means ‘empty field’, i.e. it lacks any optical counterpart down to the POSS-I limit (more than 90% of these are distant radio galaxies, the remaining ones being quasars with magnitudes fainter than the POSS-I limit). For 19 sources the 408 MHz data are not reported. In 15 cases the flux density is affected by a nearby strong source. In four cases the B3-VLA sources were not resolved by the 408 MHz beam. In order to complete the spectral database we observed 164 sources at 4.85 GHz whose flux densities were not available in the catalogues listed in Tab. 1; 314 sources with detected polarization at 10.6 GHz were re-observed at 4.85 GHz for future polarization studies. An analysis of the polarization data will be published in a forthcoming paper. In Fig. 1 we show the plot of our measurements versus the GB6 flux densities. Intrinsic source variability is likely to increase the scatter of the plot. For each source we computed two spectral indices: a low-frequency index $`\alpha _l`$ (0.3 – 1.4 GHz) and a high-frequency one, $`\alpha _h`$ (4.8 – 10.6 GHz). Fig. 2 shows the histograms of $`\alpha _l`$ and $`\alpha _h`$ (shaded) for the 1034 sources for which all four flux densities are available. The resulting median values for the two distributions are $`\alpha _l=0.853`$ and $`\alpha _h=1.053`$ ($`S\propto \nu ^{-\alpha }`$). In Fig. 3 we show a radio colour-colour diagram illustrating the different population areas of radio galaxies and quasars. As already evident in Fig. 2, $`\alpha _h`$ covers a wider range of values (dispersion 0.40) than $`\alpha _l`$ (dispersion 0.23). This is to be expected if spectral steepening due to synchrotron and inverse Compton energy losses is important: it changes $`\alpha _h`$ first, before the sources have aged sufficiently for $`\alpha _l`$ to be affected as well. The corresponding evolutionary track in the $`\alpha _l`$–$`\alpha _h`$ diagramme is the one populated by the radio galaxies in Fig. 3: if these sources commence their lives with flat (injection) spectra, they should gradually move downward at a faster rate than leftward. Also evident in Fig. 3 is that radio galaxies (crosses) have on average steeper high-frequency spectra than quasars; in particular, the radio galaxies dominate the lowest part of the diagramme. Some sources (essentially quasars) exhibit extreme values, especially those with flat $`\alpha _l`$ and/or flat $`\alpha _h`$ (populating the upper and right-hand portion of the plot). These may possess self-absorbed components that become visible in different frequency regimes, depending on their optical thickness. A thorough analysis and interpretation of our results will be presented in a forthcoming paper. ###### Acknowledgements. We thank Helge Rottmann for his help during the Effelsberg observations.
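For reference, the two-point spectral indices follow directly from pairs of flux densities; a minimal sketch with made-up values (not taken from Table 2):

```python
import numpy as np

# Two-point spectral index with the convention S proportional to nu**(-alpha):
# alpha = -ln(S1/S2) / ln(nu1/nu2), frequencies in GHz, flux densities in mJy.
def spectral_index(s1, nu1, s2, nu2):
    return -np.log(s1 / s2) / np.log(nu1 / nu2)

# Illustrative flux densities for a single hypothetical source.
S_327, S_1400, S_4850, S_10600 = 900.0, 260.0, 70.0, 31.0
alpha_l = spectral_index(S_327, 0.327, S_1400, 1.4)    # low-frequency index
alpha_h = spectral_index(S_4850, 4.85, S_10600, 10.6)  # high-frequency index
print(f"alpha_l = {alpha_l:.2f}, alpha_h = {alpha_h:.2f}")  # ~0.85 and ~1.04 here
```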
We are grateful to Dr. Heinz Andernach, whose comments on the manuscript helped to improve the paper significantly. Part of this work was supported by the Deutsche Forschungsgemeinschaft, grant KL533/4-2, and by the European Commission, TMR Programme, Research Network Contract ERBFMRXCT97-0034 “CERES”. We thank the Italian Ministry for University and Scientific Research (MURST) for partial financial support (grant Cofin98-02-32). We acknowledge the use of NASA’s SkyView facility (http://skyview.gsfc.nasa.gov) located at NASA Goddard Space Flight Center.
# Deterministic Equations of Motion and Phase Ordering Dynamics ## Abstract We numerically solve microscopic deterministic equations of motion for the 2D $`\varphi ^4`$ theory with random initial states. Phase ordering dynamics is investigated. Dynamic scaling is found, and it is dominated by a fixed point corresponding to the minimum energy of random initial states. In recent years microscopic deterministic equations of motion (e.g. Newton, Hamiltonian and Heisenberg equations) have attracted much attention of scientists in different areas. From a fundamental point of view, solutions of deterministic equations may describe both equilibrium and non-equilibrium properties of statistical systems, even though a general proof does not exist, e.g. see Refs . Ensemble theories and stochastic equations of motion are effective descriptions of static and dynamic properties of the statistical systems, respectively. With the recent development of computers, it has become possible to solve deterministic equations numerically. For example, attempts have recently been made for the $`O(N)`$ vector model and $`XY`$ model . The results support that deterministic equations correctly describe second order phase transitions. The estimated static critical exponents are consistent with those calculated from canonical ensembles. More interestingly, the macroscopic short-time (non-equilibrium) dynamic behavior of the 2D $`\varphi ^4`$ theory at criticality has also been investigated and dynamic scaling is found . The results indicate that deterministic dynamics with random initial states is in the same universality class as Monte Carlo dynamics of model A. On the other hand, phase ordering dynamics has been investigated for years . It concerns how a statistical system evolves into an ordered phase after a quench from a disordered phase. For example, the Ising model initially at a very high temperature $`T_I`$ is suddenly quenched to a temperature $`T_F`$ well below the critical temperature $`T_C`$, and then evolves dynamically. Because of the competition of the two ordered phases, it is well known that the equilibration is very slow. Investigation reveals that in the late stage (in the microscopic sense) of the dynamic evolution there emerges scaling behavior, which is somehow universal. The scaling behavior is dominated by the fixed point $`(T_I,T_F)=(\mathrm{\infty },0)`$, and away from the fixed point there are corrections to scaling. Up to now, for simple systems stochastic dynamics described by Langevin-type equations or Monte Carlo algorithms has been studied. Scaling behavior of ordering dynamics depends essentially on whether the order parameter is conserved (model B) or not (model A). For the Ising model (or $`\varphi ^4`$ theory), the dynamic exponent is $`z=2`$ for model A and $`z=3`$ for model B . The purpose of this paper is to study the phase ordering dynamics with the microscopic deterministic equations of motion, taking the 2D $`\varphi ^4`$ theory as an example. Following Refs. we consider an isolated system. The Hamiltonian of the 2D $`\varphi ^4`$ theory on a square lattice is $$H=\underset{i}{\sum }\left[\frac{1}{2}\pi _i^2+\frac{1}{2}\underset{\mu }{\sum }(\varphi _{i+\mu }-\varphi _i)^2-\frac{1}{2}m^2\varphi _i^2+\frac{1}{4!}g\varphi _i^4\right]$$ (1) with $`\pi _i=\dot{\varphi }_i`$, and it leads to the equations of motion $$\ddot{\varphi }_i=\underset{\mu }{\sum }(\varphi _{i+\mu }+\varphi _{i-\mu }-2\varphi _i)+m^2\varphi _i-\frac{1}{3!}g\varphi _i^3.$$ (2) Energy is conserved in these equations.
Solutions in the long-time regime are assumed to generate a microcanonical ensemble. The temperature could be defined as the averaged kinetic energy. For the dynamic system, however, the total energy is an even more convenient controlling parameter, since it is conserved and can be input through the initial states. For given parameters $`m^2`$ and $`g`$, there exists a critical energy density $`ϵ_c`$, separating the ordered phase (below $`ϵ_c`$) and the disordered phase (above $`ϵ_c`$). The phase transition is of the second order. The order parameter of the $`\varphi ^4`$ theory is the magnetization. The time-dependent magnetization $`M\equiv M^{(1)}(t)`$ and its second moment $`M^{(2)}`$ are defined as $$M^{(k)}(t)=\frac{1}{L^{2k}}\left\langle \left[\underset{i}{\sum }\varphi _i(t)\right]^k\right\rangle ,k=1,2.$$ (3) The average is over initial configurations, and $`L`$ is the lattice size. Following ordering dynamics with stochastic equations, we consider a dynamic process in which the system, initially in a disordered state but with energy density well below $`ϵ_c`$, is suddenly released to evolve according to Eq. (2). For simplicity, we set the initial kinetic energy to zero, i.e. $`\dot{\varphi }_i(0)=0`$. To generate a random initial configuration $`\{\varphi _i(0)\}`$, we first fix the magnitude $`|\varphi _i(0)|\equiv c`$, then randomly assign the sign of each $`\varphi _i(0)`$ with the restriction of a fixed magnetization in units of $`c`$, and finally the constant $`c`$ is determined by the given energy. We could also allow a distribution for $`|\varphi _i(0)|`$, but the difference would only be corrections to scaling. In the case of stochastic dynamics, scaling behavior of phase ordering is dominated by the fixed point $`(T_I,T_F)=(\mathrm{\infty },0)`$. In deterministic dynamics, the energy density cannot be taken to the real minimum $`e_{min}=-3m^4/2g`$, since there the system does not move. Actually, for the initial states described above, the energy is given by $$V=\underset{i}{\sum }\left[(d-\frac{1}{2}m^2)\varphi _i^2+\frac{1}{4!}g\varphi _i^4\right].$$ (4) Here $`d`$ is the spatial dimension. The conjecture is that the scaling behavior is dominated by the minimum energy density $`v_{min}=V_{min}/L^2`$, which is a kind of fixed point. In this paper, we consider the case $`d<m^2/2`$. Then $`v_{min}=-6(d-m^2/2)^2/g`$. From now on, we redefine the energy density such that $`e_{min}`$ is zero. Then the fixed point is $`ϵ_0=v_{min}-e_{min}`$. To solve the equations of motion (2) numerically, we discretize $`\ddot{\varphi }_i`$ by $`(\varphi _i(t+\mathrm{\Delta }t)+\varphi _i(t-\mathrm{\Delta }t)-2\varphi _i(t))/(\mathrm{\Delta }t)^2`$. After an initial configuration is prepared, we update the equations of motion until $`t=650`$ or $`1000`$. Then we repeat the procedure with other initial configurations. From the experience in Ref. , $`\mathrm{\Delta }t=0.05`$ is small enough for our updating times. In our calculations, we mainly use a lattice size $`L=512`$, and average over $`200`$ samples of initial configurations. Some simulations have also been performed for $`L=1024`$ with $`50`$ samples to estimate the finite size effect. An important observable is the equal-time correlation function $$C(r,t)=\frac{1}{L^2}\underset{i}{\sum }\varphi _i(t)\varphi _{i+r}(t).$$ (5) Here the lattice site $`i+r`$ is a distance $`r`$ away from $`i`$. The scaling hypothesis is that at the late stage of the time evolution, $`C(r,t)`$ obeys a scaling form $$C(r,t)=f(r/t^{1/z}),$$ (6) where $`z`$ is the so-called dynamic exponent and the initial magnetization $`m_0=0`$.
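A minimal sketch of this update scheme follows; the lattice size, number of steps and the initial amplitude $`c`$ below are ad hoc illustrative choices (in the actual calculation $`c`$ is fixed by the chosen energy density):

```python
import numpy as np

# Explicit leapfrog update of eq. (2): the second time derivative is
# discretized as (phi(t+dt) + phi(t-dt) - 2 phi(t)) / dt**2.
# Parameters follow one of the sets used in the text, (m^2, g) = (6.0, 1.8).
L, m2, g, dt = 64, 6.0, 1.8, 0.05

def laplacian(phi):
    """2D lattice Laplacian with periodic boundary conditions."""
    return sum(np.roll(phi, s, axis=a) for s in (1, -1) for a in (0, 1)) - 4 * phi

def force(phi):
    """Right-hand side of eq. (2)."""
    return laplacian(phi) + m2 * phi - g * phi**3 / 6.0

# Random initial state: |phi_i| = c with random signs and zero magnetization.
c = 2.0                                   # ad hoc amplitude for illustration
signs = np.ones(L * L); signs[: L * L // 2] = -1
rng = np.random.default_rng(1); rng.shuffle(signs)
phi = c * signs.reshape(L, L)
phi_old = phi.copy()                      # zero initial kinetic energy

for step in range(1000):                  # t = 50 with dt = 0.05
    phi_new = 2 * phi - phi_old + dt**2 * force(phi)
    phi_old, phi = phi, phi_new

# For one sample: M = mean(phi), and M^(2) = M**2 before ensemble averaging.
print("magnetization:", phi.mean(), " second moment:", phi.mean() ** 2)
```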
For stochastic dynamics, this scaling form is valid for all temperatures well below the critical temperature. Monte Carlo simulations, e.g. for the Ising model, actually show that at the fixed point $`(T_I,T_F)=(\mathrm{\infty },0)`$ the scaling behavior often emerges at a relatively early time $`t`$ in the macroscopic sense , after a time scale $`t_{mic}`$ which is large enough in the microscopic sense. Away from the fixed point, there are corrections to scaling. For deterministic dynamics, we expect that the minimum energy density of the random initial states, $`ϵ_0=v_{min}-e_{min}`$, plays a similar role. Another interesting observable is the auto-correlation function $$A(t)=\frac{1}{L^2}\underset{i}{\sum }\varphi _i(0)\varphi _i(t).$$ (7) The scaling hypothesis for the auto-correlation $`A(t)`$ is a power law behavior $$A(t)\sim t^{-\lambda /z}.$$ (8) It implies a divergent correlation time, and ordering dynamics is in some sense ‘critical’. Here $`\lambda `$ is another independent exponent. We have carried out computations with a lattice size $`L=512`$ for parameters $`(m^2,g)=(6.0,1.8)`$, $`(6.0,5.4)`$ and $`(8.0,2.4)`$ at the fixed point $`ϵ_0`$. For $`(m^2,g)=(6.0,1.8)`$, extra simulations with energy density $`ϵ=ϵ_0+4/3`$ and, at $`ϵ_0`$, with a larger lattice $`L=1024`$ have been performed. The auto-correlation has been plotted in Fig. 1. The curve for $`(m^2,g)=(6.0,1.8)`$ with $`L=1024`$ (not in the figure) overlaps with that for $`L=512`$. In the figure, we clearly see a nice power law behavior after $`t_{mic}\approx 50`$–$`100`$. The dashed line is for $`(m^2,g)=(6.0,1.8)`$ with energy density $`ϵ=ϵ_0+4/3`$, and the corrections to scaling are still not large. All curves have nearly the same slope, indicating a kind of universality; the fixed point evidently plays an important role. As in the case of the Ising model with Monte Carlo dynamics , there is a small curvature in the curves, but upwards. This gives rise to about a one or two percent difference in the slope, depending on the measured time interval. Slopes for different curves also have a comparable uncertainty. Taking into account all these factors and statistical errors, we estimate the exponent $`\lambda /z=0.460(10)`$. In Fig. 2, the equal-time correlation function $`C(r,t)`$ is displayed. The curves are for $`(m^2,g)=(6.0,1.8)`$ with $`L=1024`$, and one sees clear self-similarity during the time evolution. According to the scaling form (6), data for different times $`t`$ should collapse if $`r`$ is suitably rescaled by $`t^{1/z}`$. In other words, by searching for the best collapse of the data we can obtain the dynamic exponent $`z`$. This collapse of the data is shown on the first curve from the left. All data points fall nicely on a single curve, except for a small departure for $`t=20`$. The corresponding dynamic exponent measured in the time interval $`[40,640]`$ is $`z=2.69(9)`$. In Table I, we list values of $`z`$ for different parameters, measured in different time intervals. Again, for larger times $`t`$ the dynamic exponent $`z`$ tends to be slightly smaller. We believe the small deviation for different parameters $`(m^2,g)`$ is more or less due to uncontrolled systematic errors and/or possible corrections to scaling. From the table, we estimate the dynamic exponent $`z=2.65(10)`$. This is significantly different from $`z=2.0`$ for the Ising model with stochastic dynamics of model A. Very interesting is that the scaling function $`f(x)`$ in Eq.
(6) for the $`\varphi ^4`$ theory is the same as that of the Ising model with Monte Carlo dynamics of model A , even though the exponent $`z`$ is different. This is shown on the last curve from the left in Fig. 2. To plot the functions, $`r`$ and $`C(r,t)`$ have been suitably rescaled by constants. We did not try to get a ‘best’ fit to all the data, but only to show that they are indeed the same function. For the data of $`(m^2,g)=(6.0,5.4)`$ ($`\times `$) and $`(6.0,5.4)`$ ($``$), only $`r`$ is rescaled. For the Ising model (full diamonds), the rescaling factor for $`r`$ happens to be $`1/2`$. A simple understanding of the scaling behavior of $`C(r,t)`$ can be achieved from the second moment of the magnetization. Integrating over $`r`$ in Eq. (6), we obtain a power law behavior $$M^{(2)}(t)\sim t^{d/z}.$$ (9) This is also shown in Fig. 1. Even though there are some visible fluctuations, power law behavior is observed. From the slopes of the curves after $`t\approx 100`$, we measure the exponent $`d/z=0.76(3)`$. We then estimate the dynamic exponent $`z=2.63(10)`$. For the discussion above, the initial magnetization $`m_0`$ is zero. If $`m_0`$ is non-zero, the system reaches a unique ordered state within a finite time. If $`m_0`$ is infinitesimally small, however, the time for reaching the ordered state is also infinite and scaling behavior can still be expected, at least at relatively early times (in the macroscopic sense). In this case, an interesting observable is the magnetization itself, and at early times it increases by a power law $$M(t)\sim t^\theta ,\theta =(d-\lambda )/z.$$ (10) The exponent $`\theta `$ can be written as $`x_0/z`$, with $`x_0`$ being the scaling dimension of $`m_0`$. This power law behavior has been deeply investigated in critical dynamics . In Fig. 3, the initial increase of the magnetization is shown. After $`t_{mic}\approx 80`$, nice power law behavior is seen. To avoid finite $`m_0`$ effects, very small values of $`m_0`$ have been chosen. The resulting exponent $`\theta `$ is $`0.308(9)`$ and $`0.315(30)`$ for $`m_0=0.0078`$ and $`0.0052`$, respectively. Taking into account the errors, we consider $`\theta =0.308(9)`$ as the final result. With $`\theta `$ and $`\lambda /z`$ in hand, from the scaling relation $`\theta =(d-\lambda )/z`$ we can again calculate the dynamic exponent, $`z=2.60(5)`$. In Table II, we have summarized all the measurements of the exponents. The agreement of the different measurements of $`z`$ strongly supports the dynamic scaling hypothesis. The exponents of the Ising model with stochastic dynamics of model A are from theoretical calculations . In Monte Carlo simulations, there may be small deviations . It is interesting that the dynamic exponent $`z`$ for the $`\varphi ^4`$ theory with deterministic dynamics is clearly different from that of the Ising model with stochastic dynamics, but the exponent $`\lambda `$ looks the same. In Refs. , we know that in dynamic critical phenomena, deterministic dynamics for the 2D $`\varphi ^4`$ theory and stochastic dynamics of model A for the Ising model are in the same universality class. Why is this not the case in ordering dynamics? This may be traced back to the energy conservation in our deterministic equations. Since energy couples to the order parameter, deterministic dynamics is somehow believed to be a realization of model C . For critical dynamics, in two dimensions model A and model C are the same. For ordering dynamics, however, model A and model C can be different. It is pointed out in Ref.
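The exponents quoted here are slopes in log-log plots; a minimal sketch of such a fit, on a synthetic series standing in for the measured auto-correlation:

```python
import numpy as np

# lambda/z is the (negative) slope of log A(t) vs log t in the window
# t_mic < t < t_max; here the "data" are generated with slope -0.46 plus noise.
t = np.arange(50, 650)
A = 0.8 * t ** (-0.46) * (1 + 0.01 * np.random.default_rng(2).standard_normal(t.size))

slope, intercept = np.polyfit(np.log(t), np.log(A), 1)
print(f"lambda/z estimate: {-slope:.3f}")  # compare with 0.460(10) in the text
```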
that in many cases real physical systems may be intermediate between model A and C. When $`d-m^2/2`$ becomes positive, $`v_{min}`$ moves to zero. This is an unusual fixed point ($`\varphi _i(0)=0`$), from which the system cannot move. Around this fixed point, self-similarity is also observed in the time evolution, but a simple scaling form like Eq. (6) does not give a good collapse of the data, at least up to the time $`t=650`$. A further understanding remains open. In conclusion, we have investigated ordering dynamics governed by deterministic equations of motion, taking the 2D $`\varphi ^4`$ theory as an example. Scaling behavior is found, and it is dominated by the fixed point corresponding to the minimum energy of the random initial states. The dynamic exponent $`z`$ is different from that of stochastic dynamics of model A, while the scaling function for the equal-time correlation $`C(r,t)`$ is the same. Deterministic dynamics with energy conservation might be a realization of model C. Acknowledgements: Work supported in part by the Deutsche Forschungsgemeinschaft under the project TR 300/3-1.
# Landau-Ginsberg Theory of Quark Confinement ## 1 Theory We take the free energy density $`f`$ of the gluon plasma to be a function of the temperature $`T`$ and the fundamental representation Polyakov loop $`P`$. The theory also depends on a renormalization group invariant scale-setting parameter $`\mathrm{\Lambda }`$. Perturbation theory gives a free energy $`f`$ of the form $`T^4f_4(P,g(T/\mathrm{\Lambda }))`$. Perturbation theory does not describe $`f`$ near the deconfining transition, but is probably adequate for $`T`$ much greater than the deconfinement temperature $`T_d`$. Subleading terms, of the form $`T^{4-r}\mathrm{\Lambda }^rf_r(P,g(T/\mathrm{\Lambda }))`$, are likely needed to describe the deconfining transition. Such terms are inherently non-perturbative, due to the appearance of the factor $`\mathrm{\Lambda }^r`$. It is easy to show that $`\mathrm{\Delta }\equiv \epsilon -3p`$ is given by $$\mathrm{\Delta }=\left[4-T\partial _T\right]f$$ and therefore contains information about the subleading terms. Note that $`\mathrm{\Delta }`$ is also directly related to the finite temperature contribution to the stress-energy tensor anomaly, which depends in a non-trivial way on the Polyakov loop. Given the close connection between $`\mathrm{\Delta }`$ and the subleading terms in $`f`$ which drive the deconfinement transition, it is natural to examine the behavior of $`\mathrm{\Delta }(T)`$ near $`T_d`$ as measured in simulations. Using the data of Boyd et al for $`SU(3)`$ lattice gauge theory, we find that $`\mathrm{\Delta }(T)\propto T^2`$ over a large range of temperatures above $`T_d`$. This suggests that a term in $`f`$ proportional to $`T^2`$ plays an important role in the deconfinement transition. It is also necessary to have a term proportional to $`T^0`$ (independent of $`T`$), so that there is a non-zero free energy density difference between the confined and deconfined phases at very low temperatures. Thus we conjecture the simple form for the free energy $$f(T,P)=T^4f_4(P)+T^2\mathrm{\Lambda }^2f_2(P)+\mathrm{\Lambda }^4f_0(P)$$ where $`f_0`$ must favor the confined phase to yield confinement at arbitrarily low temperatures. We look at the one loop perturbative result for guidance on the possible forms for $`f_r`$. We define the eigenvalues $`q_j`$ by diagonalizing the fundamental representation Polyakov loop $`P`$: $`P_{jk}=\mathrm{exp}\left[i\pi q_j\right]\delta _{jk}`$. The free energy for gluons in a constant $`A_0`$ background is : $`f_g(q)`$ $`=`$ $`{\displaystyle \frac{2}{\beta }}Tr_A{\displaystyle \int \frac{d^3k}{(2\pi )^3}\mathrm{ln}\left[1-e^{-\beta \omega _k}P\right]}`$ $`=`$ $`-{\displaystyle \frac{2}{\beta }}Tr_A{\displaystyle \int \frac{d^3k}{(2\pi )^3}\sum _{n=1}^{\mathrm{\infty }}\frac{1}{n}e^{-n\beta \omega _k}P^n}`$ $`=`$ $`{\displaystyle \frac{2\pi ^2T^4}{3}}{\displaystyle \underset{j,k=1}{\overset{N}{\sum }}}(1-{\displaystyle \frac{1}{N}}\delta _{jk})B_4\left({\displaystyle \frac{\left|\mathrm{\Delta }q_{jk}\right|_2}{2}}\right)`$ where $`\left|\mathrm{\Delta }q_{jk}\right|_2\equiv \left(q_j-q_k\right)\mathrm{mod}(2)`$ and $`B_4`$ is the fourth Bernoulli polynomial, given by $`B_4(x)=x^4-2x^3+x^2-\frac{1}{30}`$. The free energy is a sum of terms, each of which represents field configurations in which a net number of $`n`$ gluons go around space-time in the Euclidean time direction. ## 2 Simple Model In imitation of perturbation theory, we use the Bernoulli polynomial to construct $`f_0`$, $`f_2`$ and $`f_4`$ as polynomials in the $`q`$ variables with the appropriate symmetries.
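As a quick numerical check of the one-loop formula above (a sketch; the eigenvalue sets below are the standard SU(3) choices):

```python
import numpy as np

# One-loop gluon free energy in a constant A_0 background, evaluated from the
# Bernoulli-polynomial sum above, in units of T^4.  q holds the eigenvalue
# angles q_j of the fundamental Polyakov loop, P_jj = exp(i*pi*q_j).
def B4(x):
    return x**4 - 2 * x**3 + x**2 - 1.0 / 30.0

def f_gluon(q):
    q = np.asarray(q, dtype=float)
    N = q.size
    dq = np.abs(q[:, None] - q[None, :]) % 2.0   # |Delta q_jk| mod 2
    weights = 1.0 - np.eye(N) / N                # the (1 - delta_jk / N) factor
    return (2 * np.pi**2 / 3) * np.sum(weights * B4(dq / 2.0))

# A_0 = 0 (deconfined) reproduces the ideal gas value -8*pi^2/45, while the
# confining eigenvalues q = (2/3, 0, -2/3), giving P_F = 0, lie higher.
print(f_gluon([0.0, 0.0, 0.0]), -8 * np.pi**2 / 45)
print(f_gluon([2 / 3, 0.0, -2 / 3]))
```

The $`A_0=0`$ point reproduces the ideal gluon gas free energy $`-8\pi ^2T^4/45`$, while the confining eigenvalues give a small positive free energy density, so perturbation theory alone favors deconfinement.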
There are two inequivalent directions in the Cartan subalgebra of $`SU(3)`$, $`\lambda _3`$ and $`\lambda _8`$. Confinement is achieved by motion in the $`\lambda _3`$ direction away from $`A_0=0`$. Defining $`P_F=Tr_F(P)`$, we parametrize motion along the line $`\mathrm{Im}P_F=0`$ by $$P_F(\psi )=1+2\mathrm{cos}\left[2\pi \left(1-\psi \right)/3\right]$$ with $`\psi =0`$ giving $`P_F=0`$, and $`\psi =1`$ corresponding to $`A_0=0`$ and $`P_F=3`$. We take the free energy density to have the form $$f(\psi ,T)=aT^4\left(\psi ^4-\frac{2}{3}\psi ^3-\psi ^2\right)+\left(b+cT^2\right)\psi ^2$$ where $`a=4\pi ^2/15`$; $`b`$ and $`c`$ fix the critical properties. This potential can be extended to the entire Lie algebra, and contains all required symmetries. For low temperatures, the $`b\psi ^2`$ term dominates. If $`b>0`$, the system will be confined. The parameter $`b`$ can be interpreted as the free energy difference at $`T=0`$ between the $`\psi =0`$ confined phase and the fully deconfined $`\psi =1`$ phase. ## 3 Results The above potential has the correct low and high temperature behavior built in, and has two free parameters, $`b`$ and $`c`$. We can use one of these to set the overall scale by fixing the deconfinement temperature. To determine the remaining parameter, we fit the lattice data for $`\mathrm{\Delta }`$ at $`N_t=8`$, which is well measured and a good approximation to the continuum limit. With $`T_d=0.272`$ GeV, we obtain $`b^{1/4}=0.356`$ GeV and $`c^{1/2}=0.313`$ GeV. The results of our fitting procedure are shown in figures 1, 2 and 3 for $`\mathrm{\Delta }`$, $`p`$ and $`\epsilon `$. The agreement is good throughout the range $`T_d`$ to $`4T_d`$. The discrepancy in the high-temperature behavior of $`p`$ and $`\epsilon `$ is probably accounted for by HTL-improved perturbation theory. ## 4 Extended Model The physical origin of the parameters $`b`$ and $`c`$ above is obscure. Since the trace of the stress-energy tensor $`\theta _\mu ^\mu `$ couples to the scalar glueball, we introduce a scalar glueball field $`\varphi `$ as the source of scale symmetry breaking in an extended model. For $`SU(3)`$, our extended model is $$f=aT^4\left(\psi ^4-\frac{2}{3}\psi ^3-\psi ^2\right)+\left(\alpha \varphi ^4+\beta \varphi ^2T^2\right)\psi ^2+\lambda \varphi ^4\mathrm{log}\left(\frac{\varphi ^2}{e^{1/2}\mu ^2}\right).$$ Spontaneous symmetry breaking of $`\varphi `$ via a Coleman-Weinberg potential introduces the scale $`\mu `$. If we make the identification $`\varphi ^4\sim Tr\left(F_{\mu \nu }^2\right)`$, the $`T=0`$ potential for $`\varphi `$ can be derived in a variety of ways: 1) renormalization group ; 2) explicit calculation for constant fields; 3) stress-energy tensor anomaly ; 4) stress-energy sum rules . The values of $`\lambda `$ and $`\mu `$ can be determined from the values of the gluon condensate and the glueball mass. The $`T=0`$ condensate obtained from the Coleman-Weinberg potential is $`2\lambda \varphi ^4=2\lambda \mu ^4`$, and the glueball mass is given by $`M_s^2=8\lambda \mu ^2`$. A similar glueball potential has been used to model the chiral transition . A coupling between $`\varphi `$ and $`P`$ can be inferred from perturbation theory ; similar couplings to the chiral order parameter exist . We have found values for the parameters $`\alpha `$, $`\beta `$, $`\lambda `$ and $`\mu `$ which mimic the behavior of our simple model near $`T_d`$. Our extended model has a potentially fatal problem associated with the restoration of scale symmetry.
For plausible values of the gluon condensate and glueball mass, restoration of scale symmetry at $`T_d`$ leads to a single abrupt phase transition, incompatible with lattice data. The alternative, with unrealistic values, is restoration above $`T_d`$ via a first order transition, which would be observable in lattice data. This argues against any simple role of the glueball in the thermodynamics of the gluon plasma.
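To make the role of $`b`$ and $`c`$ in the simple model of Section 2 concrete, here is a small numerical sketch; it takes the quartic polynomial as written above and the quoted fit values as inputs, with $`p=-f`$ and $`\mathrm{\Delta }=\left[4-T\partial _T\right]f`$ evaluated at the minimum (the temperatures printed are illustrative choices, not those of the original fit):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simple-model thermodynamics with the fitted values quoted in the text,
# b^(1/4) = 0.356 GeV and c^(1/2) = 0.313 GeV (all units GeV).
a, b, c = 4 * np.pi**2 / 15, 0.356**4, 0.313**2

def f(psi, T):
    return a * T**4 * (psi**4 - (2 / 3) * psi**3 - psi**2) + (b + c * T**2) * psi**2

def observables(T):
    res = minimize_scalar(f, bounds=(0.0, 1.5), args=(T,), method="bounded")
    psi = res.x if res.fun < 0.0 else 0.0       # confined branch: f = 0 at psi = 0
    p = -f(psi, T)                              # pressure
    delta = (4 * b + 2 * c * T**2) * psi**2     # Delta at the minimum (envelope theorem)
    return psi, p, delta

for T in (0.2, 0.4, 0.6, 1.0):
    print(T, observables(T))
```

Since $`\partial f/\partial \psi =0`$ at the minimum, $`\mathrm{\Delta }=(4b+2cT^2)\psi ^2`$ there, which is the origin of the $`\mathrm{\Delta }\propto T^2`$ behavior above $`T_d`$.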
# Quantum noise-induced chaotic oscillations ## Abstract We examine the weak quantum noise limit of the Wigner equation for phase space distribution functions. It has been shown that the leading order quantum noise, described in terms of an auxiliary Hamiltonian, manifests itself as an additional fluctuational degree of freedom which may induce chaotic and regular oscillations in a nonlinear oscillator. The absence of any direct counterpart to classical trajectories in phase space in quantum theory poses a special problem in nonlinear dynamical systems from the point of view of quantum-classical correspondence . As an essential step towards understanding quantum systems, a number of semiquantum methods, via WKB approximation, Ehrenfest theorem or mean field approximation, as well as some exact calculations, etc., have been proposed and investigated over the years . A particularly noteworthy case concerns a system that is classically integrable but not so quantum mechanically, due to tunneling. In the present paper we examine a related issue, i.e., the weak quantum noise limit of the Wigner equation for phase space distribution functions, and show that it is possible to describe the quantum fluctuations of the system in terms of an auxiliary degree of freedom within an effective Hamiltonian formalism. This allows us to demonstrate an interesting quantum noise-induced chaotic and regular behaviour in a driven double-well oscillator. To start with we consider a one-degree-of-freedom system described by the Hamiltonian equations of motion ; $$\dot{x}=\frac{\partial H}{\partial p}=p,\dot{p}=-\frac{\partial H}{\partial x}=-V^{\prime }(x,t)$$ (1) where $`x`$ and $`p`$ are the co-ordinate and momentum variables for the system described by the Hamiltonian $`H(x,p,t)`$. $`V(x,t)`$ refers to the potential of the system. The reversible Liouville dynamics corresponding to Eq.(1) is given by $$\frac{\partial \rho }{\partial t}=-p\frac{\partial \rho }{\partial x}+V^{\prime }(x,t)\frac{\partial \rho }{\partial p}$$ (2) Here $`\rho (x,p,t)`$ is the classical phase space distribution function. For a quantum-mechanical system, however, $`x`$, $`p`$ are not simultaneous observables because they become operators which obey the Heisenberg uncertainty relation. The quantum analog of the classical phase space distribution function $`\rho `$ corresponds to the Wigner phase space function $`W(x,p,t)`$ ; $`x`$, $`p`$ now being the c-number variables. $`W`$ is given by the Wigner equation ; $$\frac{\partial W}{\partial t}=-p\frac{\partial W}{\partial x}+V^{\prime }(x,t)\frac{\partial W}{\partial p}+\sum _{n\ge 1}\frac{\hbar ^{2n}(-1)^n}{2^{2n}(2n+1)!}\frac{\partial ^{2n+1}V}{\partial x^{2n+1}}\frac{\partial ^{2n+1}W}{\partial p^{2n+1}}.$$ (3) The third term in Eq.(3) corresponds to the quantum correction to classical Liouville dynamics. Our aim in this report is to explore an auxiliary Hamiltonian description corresponding to Eq.(3) in the semiclassical limit $`\hbar \to 0`$. To put this in an appropriate context, let us first draw an analogy with an observation on the weak thermal noise limit of the overdamped Brownian motion of a particle in a force field. In that significant analysis, Luchinsky and McClintock have studied the large fluctuations (much larger than $`\sqrt{D}`$, $`D`$ being the diffusion coefficient) of the dynamical variables $`\stackrel{}{x}`$ away from, and return to, the stable state of the system, with a clear demonstration of detailed balance.
The physical situation is governed by the standard Fokker-Planck equation for the probability density $`P_c(\vec{x},t)`$, $$\frac{\partial P_c(\vec{x},t)}{\partial t}=-\vec{\nabla }\left[\vec{K}(\vec{x},t)P_c(\vec{x},t)\right]+\frac{D}{2}\nabla ^2P_c(\vec{x},t),$$ (6) where $`\vec{K}(\vec{x},t)`$ denotes the force field. In the weak noise limit $`D`$ is considered to be a smallness parameter such that for small $`D`$, $`P_c(\vec{x},t)`$ can be described by a WKB-type approximation of the Fokker-Planck equation of the form $`P_c(\vec{x},t)=z(\vec{x},t)\mathrm{exp}(-\frac{w(\vec{x},t)}{D})`$ . Here $`z(\vec{x},t)`$ is a prefactor and $`w(\vec{x},t)`$ is the classical action satisfying the Hamilton-Jacobi equation, which can be solved by integration of an auxiliary Hamiltonian equation of motion $$\dot{\vec{x}}=\vec{p}+\vec{K},\qquad \dot{\vec{p}}=-\frac{\partial \vec{K}}{\partial \vec{x}}\cdot \vec{p}$$ (7) $$H_{aux}(\vec{x},\vec{p},t)=\vec{p}\cdot \vec{K}(\vec{x},t)+\frac{1}{2}\vec{p}\cdot \vec{p},\qquad \vec{p}=\vec{\nabla }w,$$ (8) where $`\vec{p}`$ is the momentum of the auxiliary system. The origin of this auxiliary momentum $`\vec{p}`$ is the fluctuations of the reservoir. In a thermally equilibrated system, as emphasized by Luchinsky and McClintock , a typical large fluctuation of the variable $`\vec{x}`$ implies a temporary departure from its stable state $`\vec{x}_s`$ to some remote state $`\vec{x}_f`$ (in the presence of $`\vec{p}`$) followed by a return to $`\vec{x}_s`$ as a result of relaxation in the absence of fluctuations (i.e., $`\vec{p}=0`$). Luchinsky and McClintock have studied these fluctuational and relaxational paths in analog electronic circuits and demonstrated the symmetry of growth and decay of classical fluctuations in equilibrium. We now return to the present problem and in analogy to the weak thermal noise limit we look for the weak quantum noise limit of Eq.(5) by setting $`\hbar \to 0`$ with $`W(x,p,t)`$ described by a WKB-type approximation of the form $$W(x,p,t)=W_0(x,t)\mathrm{exp}\left(-\frac{s(x,p,t)}{\hbar }\right),$$ (9) where $`W_0`$ is again a pre-exponential factor and $`s(x,p,t)`$ is the classical action function satisfying a Hamilton-Jacobi equation, which can be solved by integrating the following Hamilton's equations $$\dot{x}=p$$ (10) $$\dot{X}=-P$$ (11) $$\dot{p}=-V^{\prime }(x,t)+\sum _{n\geq 1}\frac{(-1)^{3n+1}}{2^{2n}}\frac{1}{(2n)!}\frac{\partial ^{2n+1}V}{\partial x^{2n+1}}X^{2n}$$ (12) $$\dot{P}=V^{\prime \prime }(x,t)X-\sum _{n\geq 1}\frac{(-1)^{3n+1}}{2^{2n}(2n+1)!}\frac{\partial ^{2(n+1)}V}{\partial x^{2(n+1)}}X^{2n+1}$$ (13) with the auxiliary Hamiltonian $`H_{aux}`$ $$H_{aux}=pP-V^{\prime }(x,t)X+\sum _{n\geq 1}\frac{(-1)^{3n+1}X^{2n+1}}{2^{2n}(2n+1)!}\frac{\partial ^{2n+1}V}{\partial x^{2n+1}}$$ (14) where we have defined the auxiliary co-ordinate $`X`$ and momentum $`P`$ as $$X=\frac{\partial s}{\partial p}\mathrm{and}P=\frac{\partial s}{\partial x}.$$ (15) The interpretation of the auxiliary variables $`X`$ and $`P`$ is now derivable from the analysis of Luchinsky and McClintock . The introduction of $`X`$ and $`P`$ in the dynamics implies the addition of a new degree of freedom into the classical system originally described by $`x,p`$.
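These characteristic equations can be checked symbolically. The sketch below is our own consistency check (not part of the original paper): for a quartic potential, for which the series in Eq.(14) truncates at $`n=1`$, it rederives the flow from $`H_{aux}`$ using the conjugate pairings implied by Eqs.(10)–(13).

```python
import sympy as sp

x, p, X, P, t = sp.symbols('x p X P t')
a, b, g, Om = sp.symbols('a b g Omega', positive=True)

# Quartic driven double-well potential (Eq.(17) below)
V = a*x**4 - b*x**2 + g*x*sp.cos(Om*t)

# Eq.(14): for quartic V only the n=1 term of the series survives, with
# coefficient (-1)^(3*1+1) / (2^2 * 3!) = 1/24.
H_aux = p*P - sp.diff(V, x)*X + sp.Rational(1, 24)*sp.diff(V, x, 3)*X**3

xdot = sp.diff(H_aux, P)              # = p                    (Eq.(10))
Xdot = -sp.diff(H_aux, p)             # = -P                   (Eq.(11))
pdot = sp.expand(sp.diff(H_aux, X))   # quantum-corrected force (Eq.(12))
Pdot = sp.expand(-sp.diff(H_aux, x))  #                        (Eq.(13))

print(pdot)  # -4*a*x**3 + 2*b*x - g*cos(Omega*t) + 3*a*x*X**2
print(Pdot)  # 12*a*x**2*X - 2*b*X - a*X**3
```

The printed expressions reproduce Eqs.(20)–(21) below, confirming that the auxiliary system is the set of characteristics of the Hamilton-Jacobi equation for $`s(x,p,t)`$.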
Since the auxiliary degree of freedom ($`X,P`$) owes its existence to the weak quantum noise, we must look for the influence of weak quantum fluctuations on the dynamics in the limit $`X\to 0`$, $`P\to 0`$, so that the Hamiltonian vanishes (since $`X`$ and $`P`$ appear as multiplicative factors in every term of the auxiliary Hamiltonian $`H_{aux}`$). It is therefore plausible that this vanishing Hamiltonian method captures the essential features of some generic quantum effect of the dynamics in classical terms in the weak quantum fluctuation limit. In what follows we shall be concerned with quantum noise-induced barrier crossing dynamics, a typical effect of this kind, in a driven double-well system. Furthermore, since the auxiliary Hamiltonian describes an effective two-degree-of-freedom system, the system, in general, by virtue of nonintegrability may admit chaotic behaviour. This allows us to study a dynamical system where one of the degrees of freedom is of quantum origin. Thus if the driven one-degree-of-freedom system is chaotic, the influence of the quantum fluctuational degree of freedom on it appears to be quite significant from the point of view of what may be termed quantum chaos. We point out, in passing, that the Wigner function approach, of a somewhat different kind, has also been considered earlier by Zurek and others for the analysis of the quantum decoherence problem in the context of quantum-classical correspondence. The testing ground of the above analysis is a driven double-well oscillator characterized by the following Hamiltonian $$H=\frac{p^2}{2}+V(x,t),$$ (16) $$V(x,t)=ax^4-bx^2+gx\mathrm{cos}\mathrm{\Omega }t$$ (17) where $`a`$ and $`b`$ are the constants defining the potential, and $`g`$ includes the effect of the coupling of the oscillator to the external field with frequency $`\mathrm{\Omega }`$. The model described by (16)–(17) has been the standard paradigm for studying chaotic dynamics over the last few years . The equations of motion corresponding to the auxiliary Hamiltonian $`H_{aux}`$ are given by $$\dot{x}=p$$ (18) $$\dot{X}=-P$$ (19) $$\dot{p}=-4ax^3+2bx-g\mathrm{cos}\mathrm{\Omega }t+3axX^2$$ (20) $$\dot{P}=(12ax^2-2b)X-aX^3$$ (21) In order to make the numerical analysis that follows consistent with this scheme of the weak quantum noise limit it is necessary to consider the vanishing limit of the auxiliary Hamiltonian. To this end we fix the initial condition for the quantum noise degree of freedom at $`P=0`$ and let $`X`$ be very small for the entire analysis. The relevant parameters for the numerical study are $`a=0.5`$, $`b=10`$, $`g=10`$ and $`\mathrm{\Omega }=6.07`$. The results of numerical integration of Eqs.(18)–(21) for the initial condition of the oscillator $`p=0`$, $`x=-2.512`$ (along with $`P=0`$ and $`X=1.5\times 10^{-6}`$) are shown in the Poincare plot (Fig. 1). What is apparent from a detailed follow-up of the system is that the system rapidly jumps back and forth between the two wells at irregular intervals of time, resulting in a chaotic Poincare map spread over the two wells. This is in sharp contrast to what we observe in Fig. 2 on plotting the results of numerical integration of the classical equations of motion, Eqs.(1)–(2), with the Hamiltonian (16) for the same initial condition $`p=0`$ and $`x=-2.512`$. The system in this case resides in the four islands of the left well.
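A minimal integration sketch (ours; the integrator, step size and section-detection choices are assumptions rather than the authors' reported settings) reproduces this type of computation for the Fig. 1 initial condition, recording the $`X=0`$ surface of section:

```python
import numpy as np

# parameters quoted in the text
a, b, g, Om = 0.5, 10.0, 10.0, 6.07

def flow(t, y):
    """Auxiliary Hamiltonian flow, Eqs.(18)-(21), for the driven double well."""
    x, p, X, P = y
    dx = p
    dp = -4*a*x**3 + 2*b*x - g*np.cos(Om*t) + 3*a*x*X**2
    dX = -P
    dP = (12*a*x**2 - 2*b)*X - a*X**3
    return np.array([dx, dp, dX, dP])

def rk4_step(t, y, h):
    k1 = flow(t, y)
    k2 = flow(t + h/2, y + h*k1/2)
    k3 = flow(t + h/2, y + h*k2/2)
    k4 = flow(t + h, y + h*k3)
    return y + (h/6)*(k1 + 2*k2 + 2*k3 + k4)

# initial condition of Fig. 1: x=-2.512, p=0, X=1.5e-6, P=0
y = np.array([-2.512, 0.0, 1.5e-6, 0.0])
t, h = 0.0, 1e-3
section = []                              # (x, p) points on the X=0 section
for _ in range(2_000_000):
    y_new = rk4_step(t, y, h)
    if y[2] * y_new[2] < 0:               # sign change of X -> section crossing
        section.append((y_new[0], y_new[1]))
    y, t = y_new, t + h
```

Plotting the accumulated `section` points gives the Poincare map; with the tiny nonzero $`X`$ the trajectory eventually visits both wells, while setting $`X=P=0`$ identically recovers the regular classical islands.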
It is thus immediately apparent that the quantum noise degree of freedom, which imparts weak quantum fluctuations to the system through a very small but nonzero $`X`$, induces a passage from the left to the right well and back. In Fig. 3 we fix the initial condition at a different turning point, $`p=0`$, $`x=-2.509`$, and calculate the auxiliary Hamiltonian dynamics, Eqs.(18)–(21). It is interesting to observe that the noise strength is not sufficient to make the system move from the left well, where it stays permanently, depicting a closed regular curve on the Poincare section. The quantum noise-induced barrier crossing dynamics from the left to the right well and back is illustrated in Figs. 4(a-c). The initial condition for the oscillator used in this case is $`p=0`$, $`x=-2.5093`$. The closed curve in Fig. 4(a) exhibits a snapshot of the confinement of the system (in the left well) up to the time $`t=nT`$, where $`n=1293`$ and $`T`$ is the time period of the external field ($`T=\frac{2\pi }{\mathrm{\Omega }}`$). The system then jumps to the right well to stay there for a period of time $`2998T`$. This is shown in Fig. 4(b). The process goes on repeating for the next period of time $`2969T`$, when the system gets confined in the left well again. The back and forth quantum noise-induced oscillations between the two wells illustrate a regular dynamics in this case. In the absence of noise the classical system \[Eqs.(1)–(2)\] remains localized in a specific well. In summary, we have shown that the leading order quantum noise in the Wigner equation for phase space distribution functions results in an auxiliary Hamiltonian where the quantum noise manifests itself as an extra fluctuational degree of freedom. Depending on the initial conditions this may induce irregular or regular hopping between the two wells of a double-well oscillator. It is thus possible that a nonlinear system may sustain chaotic oscillations driven by quantum noise, even when its classical counterpart is fully regular. ###### Acknowledgements. B. C. Bag is indebted to the Council of Scientific and Industrial Research (C.S.I.R.), Govt. of India, for partial financial support. Figure Captions 1. Plot of $`x`$ vs $`p`$ on the Poincare surface of section ($`X=0`$) for Eqs.(18)–(21) with initial condition $`x=-2.512`$, $`p=0`$, $`X=1.5\times 10^{-6}`$, $`P=0`$. (Units are arbitrary). 2. Plot of $`x`$ vs $`p`$ for Eqs.(1)–(2) with Hamiltonian (16) and initial condition $`x=-2.512`$ and $`p=0.0`$. 3. Same as in Fig. 1 but for $`x=-2.509`$ and $`p=0.0`$. 4. Same as in Fig. 1 but for $`x=-2.5093`$ and $`p=0`$. The observations are taken for the time intervals (a) $`t=0`$ to $`1293T`$ (left well), (b) $`t=1293T`$ to $`4291T`$ (right well) and (c) $`t=4291T`$ to $`7260T`$ (left well). \[$`T=\frac{2\pi }{\mathrm{\Omega }}`$ is the time period of the driving field\].
# Critical statistics in a power-law random banded matrix ensemble ## Abstract We investigate the statistical properties of the eigenvalues and eigenvectors in a random matrix ensemble with $`H_{ij}\sim |i-j|^{-\mu }`$. It is known that this model shows a localization-delocalization transition (LDT) as a function of the parameter $`\mu `$. The model is critical at $`\mu =1`$ and the eigenstates are multifractals. Based on numerical simulations we demonstrate that the spectral statistics at criticality differs from semi-Poisson statistics, which is expected to be a general feature of systems exhibiting a LDT or 'weak chaos'. In a recent paper Bogomolny et al. have investigated a number of dynamical systems and found remarkable similarities between the spectral statistics of pseudointegrable billiards and the critical statistics in the Anderson model at the mobility edge. The latter has been investigated numerically by a number of authors . At the metal-insulator transition (MIT) properties intermediate between those predicted by random matrix theory (RMT) and those of uncorrelated spectra with Poisson statistics were found. On one hand the spectral statistics for small differences in energy is reminiscent of the universal level repulsion in metals, i.e. the nearest neighbor level spacing distribution $`P(s)`$ behaves as $`P(s)\propto s^\beta `$ for $`s\to 0`$, with $`\beta =1`$, 2, or 4 for orthogonal, unitary, or symplectic symmetry of the system. On the other hand for $`s\gg 1`$, $`\mathrm{ln}P(s)\approx -as`$ with a constant $`a`$ depending weakly on $`\beta `$ but strongly on the dimensionality $`d`$, which is reminiscent of uncorrelated energy spectra in Anderson insulators. These results were found in numerical simulations on $`d`$-dimensional cubic systems using periodic boundary conditions (BC). Recently Braun et al. discovered that the shape of $`P(s)`$ depends strongly on the choice of the BCs. Upon averaging over BCs they concluded that for orthogonal symmetry $`P(s)`$ is very close to the semi-Poissonian form $$P(s)=4se^{-2s}.$$ (1) The energy level statistics is universal at the MIT in the sense that the level number variance $`\mathrm{\Sigma }^2(\overline{N})=\langle (\delta N)^2\rangle `$ is proportional to the mean number of levels for $`\overline{N}\gg 1`$, and the coefficient $`\chi `$ is independent of the BCs. Dependence of the $`P(s)`$ function on the BCs in $`d=2`$ quantum-Hall systems and appearance of the semi-Poisson statistics for a $`d=2`$ Anderson model with symplectic symmetry obtained as an average over the BCs have also been reported since then. Interestingly, similar spectral statistics has been obtained for the case of two interacting particles in a one dimensional disordered system at the interaction strength producing maximal mixing of the noninteracting basis. At the transition point (MIT) the statistical properties of the spectra and of the eigenstates are linked to one another. A remarkable relation between the level compressibility $`\chi `$ and the density correlation dimension of the eigenstates $`D_2`$ has been derived in , $$\chi =\frac{1}{2}\left(1-\frac{D_2}{d}\right).$$ (2) The dimension $`D_2`$ describes how the fourth moment of the components of an eigenfunction $`\psi `$ scales with the linear length of the system . Generally, the $`2p`$-th moment scales as $`\sum _i|\psi _i|^{2p}\propto L^{-D_p(p-1)},`$ where in the case of multifractality $`D_p`$ is a nonlinear function of $`p`$.
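A direct numerical route to $`D_2`$ follows from this moment scaling. The sketch below is our own illustration (not the authors' code): it box-counts the measure $`|\psi _i|^2`$ over boxes of size $`l`$ and fits the scaling of the second moment of the box probabilities.

```python
import numpy as np

def correlation_dimension(psi):
    """Box-counting estimate of D_2 for a normalized eigenvector psi:
    the box probabilities P_box = sum_{i in box} |psi_i|^2 satisfy
    sum_box P_box^2 ~ (l/N)^{D_2} as the box size l is varied."""
    N = len(psi)
    w = np.abs(psi)**2
    ratios, moments = [], []
    l = 2
    while l <= N // 8:
        if N % l == 0:
            P = w.reshape(N // l, l).sum(axis=1)   # box probabilities
            ratios.append(l / N)
            moments.append(np.sum(P**2))
        l *= 2
    D2, _ = np.polyfit(np.log(ratios), np.log(moments), 1)
    return D2
```

For an ergodic vector ($`|\psi _i|^21/N`$) the fit returns $`D_21`$, while a state localized on a few sites gives $`D_20`$; multifractal critical states fall in between.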
The same parameter $`D_2`$ describes the scaling of the probability overlap of two states with an energy separation substantially exceeding the mean level spacing; hence the name density correlation dimension. It has been shown that the heavily fluctuating local densities still produce a considerable overlap, so that at the MIT level repulsion is still present. In Ref. quantum chaotic systems were numerically compared to one of the simplest models that provides semi-Poissonian statistics: the short-range plasma model (SRPM). The model describes $`N`$ levels that repel each other logarithmically as in conventional RMT , but with the interaction restricted to nearest neighbors only. For this SRPM many quantities can be computed analytically in the large $`N`$ limit. The spacing distribution is given by Eq. (1), and the two-level correlation function reads $$R(s)=1-e^{-4s}.$$ (3) The corresponding level number variance, $$\mathrm{\Sigma }^2(L)=\frac{L}{2}+\frac{1}{8}(1-e^{-4L})$$ (4) leads to the level compressibility $`\chi =1/2`$, which is also the upper limit assumed by the right hand side of Eq. (2). Another way of obtaining semi-Poissonian statistics is to leave out every other element from an otherwise uncorrelated sequence of levels. Such a 'daisy' model has been studied recently in Ref. . By allowing the logarithmic repulsion to act without limits in the SRPM one recovers the RMT result, in which the level spacing distribution is well approximated by the Wigner surmise $$P(s)=\frac{\pi }{2}se^{-\frac{\pi }{4}s^2},$$ (5) and the two-level correlation function is given by $$R(s)=1-c^2(s)-\frac{dc(s)}{ds}\int _s^{\infty }c(s^{\prime })ds^{\prime }$$ (6) with $`c(s)=\mathrm{sin}(\pi s)/\pi s`$. From this correlation function the level number variance of standard RMT (up to $`1/L`$ corrections) follows $$\mathrm{\Sigma }^2(L)=\frac{2}{\pi ^2}\left[\mathrm{ln}(2\pi L)+1+\gamma -\frac{\pi ^2}{8}\right]$$ (7) where $`\gamma =0.5772\mathrm{}`$ denotes Euler's constant. The level compressibility vanishes; the levels can be thought of as particles of an incompressible fluid . In the present paper we investigate the critical spectral statistics and the multifractality of the eigenstates in a random matrix model originally proposed by Mirlin et al. and later discussed in . The $`N\times N`$ matrices in this model are real symmetric and all entries are drawn from a normal distribution with zero mean, $`\langle H_{ij}\rangle =0`$. The variance depends on the distance of the matrix element from the diagonal , $$\langle (H_{ij})^2\rangle =[1+(|i-j|/B)^{2\mu }]^{-1}.$$ (8) In Ref. it has been shown using field theoretical methods that for a fixed $`B\gg 1`$ the statistical properties of such matrices for $`\mu <1`$ resemble those of RMT. On the other hand, values $`\mu >1`$ in the limit $`N\to \infty `$ lead to uncorrelated spectra, similarly as in the case of banded random matrices . The case $`\mu =1`$ was proven to be of special importance, for it produces critical (multifractal) eigenstates and critical statistics. Due to the simplicity of the basic model and the possibility of analytical treatment of a MIT, further details have been revealed recently by Kravtsov and Mirlin . Here we compare the statistical properties of energy spectra and eigenfunction properties of this random matrix model at $`\mu =1`$ with the semi-Poisson statistics of the SRPM in a regime so far inaccessible to field theoretical methods, at $`B=1`$.
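The ensemble (8) is straightforward to sample numerically. The following sketch is our minimal illustration, with a deliberately crude unfolding via a smooth polynomial fit of the counting function, far simpler than the careful unfolding described below:

```python
import numpy as np

def prbm_levels(N, B=1.0, mu=1.0, seed=0):
    """One realization of the power-law banded ensemble, Eq.(8)."""
    rng = np.random.default_rng(seed)
    i, j = np.indices((N, N))
    sigma = 1.0 / np.sqrt(1.0 + (np.abs(i - j) / B)**(2 * mu))
    H = rng.normal(size=(N, N)) * sigma
    H = (H + H.T) / np.sqrt(2.0)            # real symmetric member
    return np.linalg.eigvalsh(H)

spacings = []
for seed in range(20):
    E = prbm_levels(800, seed=seed)
    E = E[len(E)//4: 3*len(E)//4]           # middle half of the band
    n = np.arange(len(E))                   # counting function N(E)
    smooth = np.polyval(np.polyfit(E, n, 9), E)
    spacings.extend(np.diff(smooth))        # unfolded spacings, mean ~ 1

s = np.asarray(spacings)
print("q = 1/<s^2> =", 1.0 / np.mean(s**2))  # 2/3 semi-Poisson, ~0.79 Wigner
```

With enough realizations the histogram of $`s`$ can be compared directly against Eq.(1) and Eq.(5).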
We study the shape of the nearest neighbor spacing distribution $`P(s)`$, the two-level correlation function $`R(s)`$, and obtain the spectral compressibility $`\chi `$ from the asymptotic behavior of the number variance $`\mathrm{\Sigma }^2`$. We also show the multifractality of the eigenstates and provide the correlation dimension $`D_2`$. We will show that for $`B=1`$ relation (2) is satisfied provided that the spectra and the eigenstates are limited to a small portion around the middle of the band. In our numerical investigation we have collected the spectra of $`N\times N`$ matrices for $`N=800`$–$`4800`$. The power law nature of the problem did not allow the use of efficient algorithms (e.g. the Lanczos algorithm) usually applied for the study of the MIT in the Anderson model. We have performed a very careful unfolding since the $`N\to \infty `$ and $`L\to \infty `$ limits are approached very slowly. Based on the values of $`N`$ we considered, we were able to extrapolate the expected behavior of several quantities in the $`N\to \infty `$ limit. In all of our subsequent discussion we limit ourselves to the middle half of the spectra, both for the eigenvalues and for the eigenvectors. First the density of states of the model is plotted in Fig. 1. The rescaled function is different from the semi-circle law. In Fig. 2 we show that the $`P(s)`$ is independent of $`N`$ and shows a non-negligible deviation from the semi-Poissonian form (1). The deviation is seen both for the low-$`s`$ and the large-$`s`$ part of the function. The low-$`s`$ part of the $`P(s)`$ is of the form $`P(s)\propto s`$, but with a different prefactor. The large-$`s`$ behavior differs considerably from the $`\mathrm{ln}P(s)\approx -2s`$ form expected from the semi-Poisson distribution (1). The deviation from the semi-Poisson statistics seems to persist in the $`N\to \infty `$ limit. The 'peakedness' parameter ($`q=\langle s^2\rangle ^{-1}`$) of the $`P(s)`$ converges to a value $`q=0.7342\pm 0.0009`$, larger than that calculated for the semi-Poisson distribution (1), for which $`q=2/3`$. We see that the shape of the $`P(s)`$ is indeed intermediate between the semi-Poisson and the Wigner surmise. The spacing distribution in a similar random matrix model has been studied by Nishigaki, whose result applied to our case is $`q=0.7624`$, which is close to our value. Fig. 3 shows that the level number variance $`\mathrm{\Sigma }^2(L)`$ converges to a function that has a linear part, $`\mathrm{\Sigma }^2(L)\approx \chi L`$, with a slope less than unity, $`\chi <1`$, showing a nonzero compressibility of the spectra. However, this compressibility seems to converge to a value smaller than the $`\chi =0.5`$ of the SRPM. This behavior is reflected also in the two-level correlation function, which shows deviations from Eq. (3), while it is better described by the function calculated in . It is of the form of (6) with $`c(s)=0.25\mathrm{sin}(\pi s)/\mathrm{sinh}(\pi s/4)`$. The same kernel applies also for models discussed in Refs. . We have checked the validity of the relation (2) by comparing the level compressibility obtained from the energy spectra with the value extracted from the multifractal dimension $`D_2`$ of the eigenstates. The results are shown in Fig. 4. As $`N\to \infty `$ the data extrapolate nicely to a common value of $`\chi =0.169\pm 0.019`$, which is less than the value 0.5 expected from the SRPM and is also smaller than the value $`0.27`$ found at the Anderson transition .
Our result is in full agreement with the recent analytical estimate of $`\chi =1/(2\pi )`$ obtained in Refs. and also earlier in an analogous random matrix model . Note that the $`D_2`$ value is obtained as an average over the states in the middle of the band. The inset of Fig. 4 shows that indeed the variance of $`D_2`$ decreases with increasing $`N`$, while taking the full set of eigenstates results in a distribution of these exponents that remains broad even in the $`N\to \infty `$ limit, a phenomenon already noticed in and discussed in . In Table I we have collected the relevant quantities obtained from extrapolations to the thermodynamical limit. The values of $`D_2`$ and $`\alpha _0`$ were obtained as an average of the corresponding exponents over the ensemble of states in the middle half of the spectrum. The $`D_2`$ and $`\alpha _0`$ for each state were obtained using the standard box counting algorithm . Further evidence of the multifractal behavior of the eigenstates is also presented in the distribution function of the eigenvector components $`P(Q)`$ with $`Q=|\psi |^2`$. It is broad at all length scales, i.e. a form close to log-normal is expected. In Fig. 5 the inset shows a very nice normal distribution of the variable $`\mathrm{ln}Q`$, plotted after rescaling with its mean and variance. However, the figure clearly shows deviation from the log-normal form, especially for the low-$`y`$ part of the distribution, that may be described with a power law tail of the form $`y^{1.6}`$. Similar deviations have already been detected and studied at the quantum-Hall transition and are clearly seen at the Anderson transition in $`d=3`$ systems , as well. In summary, we have investigated the spectral properties of a random matrix ensemble with entries decaying away from the diagonal in a power-law fashion. We found that the spacing distribution function is similar to, but deviates significantly from, the semi-Poisson distribution. We have also compared the numerically obtained functions $`\mathrm{\Sigma }^2(L)`$ and $`R(s)`$ with the analytically known ones of the SRPM, which produces semi-Poissonian $`P(s)`$. These functions, as well as the extrapolated value of the level compressibility $`\chi `$, differ from those obtained for the SRPM. The latter agrees well with analytical results. We also confirm the existence of multifractal states that are characterized by a correlation dimension $`D_2`$. According to our numerical results the correlation dimension satisfies the relation (2) with $`\chi `$ obtained from the spectra. Acknowledgment: We are indebted to F. Evers, Y. Fyodorov, M. Janssen, V.E. Kravtsov, A.D. Mirlin, T.H. Seligman, and P. Shukla for stimulating discussions and S.M. Nishigaki for providing his Mathematica code in order to calculate the $`P(s)`$ for the model considered in Ref. . Financial support of I.V. from the Alexander von Humboldt Foundation and from Országos Tudományos Kutatási Alap (OTKA), Grant Nos. T029813, T024136 and F024135 is gratefully acknowledged.
no-problem/9909/hep-ph9909482.html
ar5iv
text
# Restoring the sting to metric preheating ## I Introduction Standard inflationary models must end with a phase of reheating during which the inflaton, $`\varphi `$, transfers its energy to other fields. Reheating itself may begin with a violently nonequilibrium "preheating" era, when coherent inflaton oscillations lead to resonant particle production (see and refs. therein). Until recently, preheating studies implicitly assumed that preheating proceeds without affecting the spacetime metric. In particular, causality was thought to be a "silver bullet," ensuring that on cosmologically relevant scales, the non-adiabatic effects of preheating could be ignored. In fact, exciting, super-Hubble effects are possible during preheating, and metric perturbations may be resonantly amplified on all length scales . Causality is not violated precisely because of the huge coherence scale of the inflaton immediately after inflation (see also ). Strong preheating (with resonance parameter $`q\gg 1`$; see for overviews and notation) typically leads to resonant amplification of scalar metric perturbation modes $`\mathrm{\Phi }_k`$, including those on super-Hubble scales (i.e., $`k/aH\ll 1`$, where $`a`$ is the scale factor and $`H`$ the Hubble rate). One of our aims is to answer the question "how typical is typical?" The answer is crucial since preheating can lead to distortions in the anisotropies in the cosmic microwave background (CMB). Observational limits rule out those models that produce unbridled nonlinear growth, but models which pass the metric preheating test on COBE scales may nevertheless leave a non-adiabatic signature of preheating in the CMB. Hence one can no longer universally avoid consideration of reheating when analyzing inflationary predictions for cosmology, even if the final effect of reheating in some particular models is small. In this vein, it has been argued recently that metric perturbations on super-Hubble scales are in fact immune to metric preheating in the archetypal 2-field potential typically used in earlier studies . The claim arises because the initial value of the fluctuations in the created bosonic field $`\chi `$ at the start of preheating is much smaller than that used in . The basic argument is as follows. For the coupling $`\frac{1}{2}g^2\varphi ^2\chi ^2`$, strong preheating typically requires $`q\sim g^2\varphi ^2/m^2\gg 1`$ (exceptions exist in which $`q`$ is small but metric preheating is strong ). This increases the effective $`\chi `$ mass relative to the Hubble rate during inflation, $`m_{\chi ,\mathrm{eff}}\sim g\varphi \gg H>m`$, where $`m`$ is the inflaton mass. This leads to an exponential suppression $`a^{-3/2}`$ of both $`\chi `$ and $`\delta \chi _k`$ during inflation; hence these fields would have values at the start of preheating around $`10^{36}`$ times smaller than those used in all previous simulations. This would stifle any growth in the small-$`k`$ modes of $`\mathrm{\Phi }`$ until late times. Initial conditions for large-$`k`$ modes, in contrast, are claimed to be unaffected, so that they would grow nonlinear first. Their resulting backreaction would then end the resonance before any interesting effects occur on cosmologically significant scales . Irrespective of super-Hubble behavior, we note that non-perturbative metric-preheating effects are vital on smaller scales , and this in itself is a major departure from the old theory that neglects metric perturbations in preheating.
Metric preheating leads to interesting possibilities, such as significant primordial black hole formation (see also ). Returning to super-Hubble scales, $`k/aH\ll 1`$, we will show that the above suppression mechanism is highly sensitive to the particular form of the interaction Lagrangian, while metric preheating is not. Indeed, the suppression of $`\chi `$ and $`\delta \chi _k`$ at the start of preheating argued for in is absent for models in either of the following two classes: Class I - Models in which the vacuum expectation value (vev) of $`\chi `$ is nonzero during inflation. Class II - Models in which the $`\chi `$ effective mass is small during inflation but undergoes a transition and becomes large during preheating. Since these possibilities arise naturally in a variety of realistic particle physics models, we conclude that the suppression mechanism proposed recently is fragile, i.e. unstable to small changes in the potential. On the other hand, resonant growth of super-Hubble metric perturbations in preheating is robust, since it persists under small changes of the potential. The fields split into a homogeneous part and fluctuations: $`\varphi _I(t,𝐱)=\phi _I(t)+\delta \varphi _I(t,𝐱)`$. The background equations are $$H^2=\frac{1}{3}\kappa ^2\left[V+\frac{1}{2}\sum _I\dot{\phi }_I^2\right],\qquad \ddot{\phi }_I+3H\dot{\phi }_I+V_I=0,$$ (1) where $`\kappa ^2\equiv 8\pi /M_{\mathrm{pl}}^2`$ and $`V_I\equiv \partial V/\partial \phi _I`$. The linearized equations of motion for the Fourier modes of field ($`\delta \varphi _{Ik}`$) and scalar metric fluctuations ($`\mathrm{\Phi }_k`$) are $$\ddot{\delta \varphi }_{Ik}+3H\dot{\delta \varphi }_{Ik}+\frac{k^2}{a^2}\delta \varphi _{Ik}=-\sum _JV_{IJ}\delta \varphi _{Jk}+4\dot{\phi }_I\dot{\mathrm{\Phi }}_k-2V_I\mathrm{\Phi }_k,$$ (3) $$\dot{\mathrm{\Phi }}_k+H\mathrm{\Phi }_k=\frac{1}{2}\kappa ^2\sum _I\dot{\phi }_I\delta \varphi _{Ik}.$$ (4) This system is subject to the constraint $$\left[\frac{k^2}{a^2}-\frac{1}{2}\kappa ^2\sum _I\dot{\phi }_I^2\right]\mathrm{\Phi }_k=\frac{1}{2}\kappa ^2\sum _I\dot{\phi }_I^2\frac{d}{dt}\left(\delta \varphi _{Ik}/\dot{\phi }_I\right),$$ (5) which we use to check the accuracy of our numerical integrations of Eqs. (3) and (4) and to set $`\mathrm{\Phi }_k`$ initial conditions. We envisage a model with the inflaton $`\varphi _1\equiv \varphi =\phi (t)+\delta \varphi (t,𝐱)`$ coupled to a massless scalar field $`\varphi _2\equiv \chi =X(t)+\delta \chi (t,𝐱)`$ (assumed to be in its vacuum state near the end of inflation). This schematically represents the particle content of the inflationary and preheating eras. More realistic models should consider the gauge group, non-minimal coupling and fermionic effects, and of course an accurate phenomenology of metric preheating must begin to study these issues . However, since we are interested only in essential conceptual points, this simple picture will suffice for now. ## II Super-Hubble metric preheating The often-used interaction term $`\frac{1}{2}g^2\varphi ^2\chi ^2`$ is not the only coupling appropriate to preheating, but is rather one simple example for which resonance occurs. As we show below, additional couplings linear in $`\chi `$, as well as quadratic couplings in which $`g^2<0`$, provide a mechanism for escaping the inflationary suppression claimed in . Essentially, these alternatives produce a nonzero attractor $`X\ne 0`$, to which inflation drives the $`\chi `$ field, so that the initial values of $`X`$ and $`\delta \chi _{k0}`$ at preheating are not suppressed.
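To make the system above concrete, here is a minimal sketch (our own illustration, not the authors' code) of how Eqs.(1), (3) and (4) close for a single super-Hubble mode in a two-field model of the Class I type introduced below ($`ϵ=+1`$, $`n=3`$). The step size, duration, and initial data are placeholders, and the constraint (5) can be evaluated along the run as an accuracy check.

```python
import numpy as np

# Units M_pl = 1. Potential: V = m^2 phi^2/2 + gt^2 phi^3 chi + g^2 phi^2 chi^2/2
kappa2 = 8.0*np.pi
m      = 1.0e-6
g      = np.sqrt(4.0*np.pi/3.0)*1.0e-3
gt2    = (1.0e-2*g)**2                 # (g~)^2, with g~/g = 10^-2 as in the text

def rhs(y):
    """Background Eq.(1) plus one k~0 mode of Eqs.(3)-(4); gradient terms dropped."""
    phi, dphi, X, dX, f1, df1, f2, df2, Phi = y
    V   = 0.5*m**2*phi**2 + gt2*phi**3*X + 0.5*g**2*phi**2*X**2
    V1  = m**2*phi + 3.0*gt2*phi**2*X + g**2*phi*X**2     # dV/dphi
    V2  = gt2*phi**3 + g**2*phi**2*X                      # dV/dchi
    V11 = m**2 + 6.0*gt2*phi*X + g**2*X**2
    V12 = 3.0*gt2*phi**2 + 2.0*g**2*phi*X
    V22 = g**2*phi**2
    H   = np.sqrt(kappa2/3.0*(V + 0.5*dphi**2 + 0.5*dX**2))          # Eq.(1)
    dPhi = -H*Phi + 0.5*kappa2*(dphi*f1 + dX*f2)                     # Eq.(4)
    ddf1 = -3.0*H*df1 - V11*f1 - V12*f2 + 4.0*dphi*dPhi - 2.0*V1*Phi # Eq.(3)
    ddf2 = -3.0*H*df2 - V12*f1 - V22*f2 + 4.0*dX*dPhi   - 2.0*V2*Phi
    return np.array([dphi, -3.0*H*dphi - V1, dX, -3.0*H*dX - V2,
                     df1, ddf1, df2, ddf2, dPhi])

def rk4(y, h):
    k1 = rhs(y); k2 = rhs(y + 0.5*h*k1)
    k3 = rhs(y + 0.5*h*k2); k4 = rhs(y + h*k3)
    return y + (h/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

# Deep in inflation: phi = 3 M_pl, X on its attractor, toy mode amplitudes;
# Phi would properly be set from the constraint (5) and is crudely zeroed here.
y = np.array([3.0, 0.0, -(1.0e-2)**2*3.0, 0.0,
              1.0e-7, 0.0, -1.0e-11, 0.0, 0.0])
h = 0.05/(g*3.0)                       # resolve the fastest frequency ~ g*phi
for _ in range(200_000):               # roughly 10 e-folds of inflation
    y = rk4(y, h)
```

Run through the end of inflation and into the oscillatory phase, such a mode-by-mode integration exhibits the resonant amplification of $`\delta \chi _k`$ and $`\mathrm{\Phi }_k`$ discussed in the following sections.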
These possibilities are incorporated in the effective potential $$V=\frac{1}{2}m^2\varphi ^2+\frac{1}{4}\lambda \varphi ^4+\frac{1}{4}\lambda _\chi \left(\chi ^2-\sigma ^2\right)^2+\frac{1}{2}ϵg^2\varphi ^2\chi ^2+\stackrel{~}{g}^2\kappa ^{n-3}\varphi ^n\chi ,$$ (7) for the constants $`ϵ=\pm 1`$ and $`n=2,3`$. The $`\lambda `$, $`\lambda _\chi `$ terms ensure that $`V`$ is bounded from below. The various terms in this potential are phenomenologically well-motivated: • In theories where supersymmetry (SUSY) is softly broken, the potential will only acquire logarithmic radiative corrections and the suppression may apply. However, in realistic models with gravity, SUSY is replaced by supergravity (SUGRA), and SUGRA models often contain couplings of the form $`\varphi ^n\chi `$, $`n=2,3`$ . • Even if $`\stackrel{~}{g}=0`$ initially, if the $`\chi `$ field exhibits symmetry breaking ($`\sigma \ne 0`$), shifting the field $`\chi \to \chi -\sigma `$ generates the linear term $`\sigma g^2\varphi ^2\chi `$ via the quadratic coupling. The possible importance of symmetry breaking of this sort has long been noted for its role in generating single-body inflaton decays and hence complete inflaton decay. If we choose $`\sigma `$ to correspond to the grand unified theory (GUT) scale, then $`\sigma /M_{\mathrm{pl}}\sim \stackrel{~}{g}^2/g^2\sim 10^{-3}`$. • Negative coupling instability (NCI) models ($`ϵ=-1`$) are dominated by the coupling $`-g^2\varphi ^2\chi ^2`$, and the $`\chi `$ field is driven to a nonzero vev during inflation . • A fermionic coupling $`h\overline{\psi }\chi \psi `$ would lead to a driving term $`h\overline{\psi }\psi `$ in the $`\chi `$ equation of motion. This would have the similar effect of giving a nonzero vev for $`\chi `$. ### A Class I: unsuppressed initial conditions We now present analytical arguments (assuming for simplicity that $`\sigma =0`$) to show that the new couplings avoid the claimed suppression of super-Hubble $`\chi `$ fluctuations. By Eq. (1), the background $`X`$ field obeys $$\ddot{X}+3H\dot{X}+ϵg^2\phi ^2X+\lambda _\chi X^3=-\stackrel{~}{g}^2\kappa ^{n-3}\phi ^n,$$ (8) and by Eq. (3), its large-scale fluctuations satisfy $$\ddot{\delta \chi }_k+3H\dot{\delta \chi }_k+[ϵg^2\phi ^2+3\lambda _\chi X^2]\delta \chi _k=4\dot{X}\dot{\mathrm{\Phi }}_k+[(2-n)\stackrel{~}{g}^2\kappa ^{n-3}\phi ^{n-1}+2\lambda _\chi X^3/\phi ]\delta \varphi _k,$$ (10) where we used the slow-roll relation $`\mathrm{\Phi }\approx \delta \varphi /\phi `$. We now consider the two separate cases with similar results: Case 1: $`ϵ=1,\stackrel{~}{g}>0`$, $`\lambda _\chi `$ negligible. Using the fact that $`\phi ,H`$ are approximately constant during inflation, we see that while the solution of the homogeneous part of Eq. (8) decays rapidly towards zero, the particular solution arising from the inhomogeneous term is approximately constant. It follows that $`\chi `$ emerges at the end of inflation ($`t=t_0`$) with background part $$X(t_0)\approx -(\stackrel{~}{g}/g)^2\kappa ^{n-3}[\phi (t_0)]^{n-2},$$ (11) where $`\phi (t_0)\approx 0.3M_{\mathrm{pl}}`$. Similarly, the fluctuations also have a nontransient solution. For $`n=2`$, we need to include the small term $`\dot{X}\dot{\mathrm{\Phi }}_k`$, which is not straightforward to evaluate, but for $`n=3`$ we can neglect this term, and Eq.
(10) implies $$\delta \chi _k(t_0)\approx -(\stackrel{~}{g}/g)^2\delta \varphi _k(t_0).$$ (12) Thus the super-Hubble $`\chi `$ fluctuations emerge from inflation unsuppressed, though smaller than the inflaton fluctuations by a factor $`(\stackrel{~}{g}/g)^2`$. Case 2: $`ϵ=-1,\stackrel{~}{g}=0`$. The NCI coupling gives rise to a non-zero vev, since Eq. (8) has an attractor solution ($`\dot{X}\approx 0`$), and then using this solution, the fluctuations governed by Eq. (10) are also seen to have an approximate attractor solution: $$X(t_0)\approx (g/\sqrt{\lambda _\chi })\phi (t_0),\qquad \delta \chi _k(t_0)\approx (g/\sqrt{\lambda _\chi })\delta \varphi _k(t_0).$$ (13) In both cases, for consistency, inflation should be dominated by the $`\frac{1}{2}m^2\varphi ^2`$ term in the potential, and the super-Hubble fluctuations should be dominated by adiabatic inflaton fluctuations. The equations for $`\phi `$ and $`\delta \varphi `$ show that this will be secured if $`\lambda `$ is negligible and $`|ϵg^2X^2+n\stackrel{~}{g}^2\kappa ^{n-3}\phi ^{n-2}X|\ll m^2`$, given Eqs. (11)–(13). In summary, our analytical arguments show that by the end of inflation, the $`\chi `$ field and its super-Hubble fluctuations are not negligibly small; the linear couplings ($`\stackrel{~}{g}>0`$, $`ϵ=1`$) and the negative quadratic coupling ($`ϵ=-1`$, $`\stackrel{~}{g}=0`$) each provide a mechanism to evade the super-Hubble suppression of $`\chi `$ fluctuations. ### B Class I: numerical simulations In order to confirm and extend the analytical arguments above, we performed numerical simulations in one Class I model, with $`\stackrel{~}{g}^2\varphi ^3\chi `$ coupling ($`ϵ=1`$, $`\lambda `$ and $`\lambda _\chi `$ negligible). To avoid subtleties associated with matching inflation to preheating, we numerically integrated Eqs. (1)–(4) starting deep inside the inflationary era. Our primary interest is in cosmologically relevant scales, so we follow the evolution of a scale that crosses the Hubble radius at $`t=t_{\mathrm{in}}`$, about 55 $`e`$-folds before the start of preheating at $`t=t_0`$. The slow-roll approximation gives $`𝒩=\kappa ^2(\phi _{\mathrm{in}}^2-\phi _0^2)/4`$ for the number of $`e`$-folds before the end of inflation, so we choose $`\phi _{\mathrm{in}}=3M_{\mathrm{pl}}`$ to get $`𝒩\approx 55`$. For the background $`\chi `$ field we use the approximate dominant solution in Eq. (11) and take $`X_{\mathrm{in}}=-(\stackrel{~}{g}/g)^2\phi _{\mathrm{in}}`$. We follow and take the field fluctuations at Hubble-crossing ($`k=aH`$) as $`k^3|\delta \varphi _{Ik}|^2=H^3/(2\omega _{Ik})`$ and $`|\delta \dot{\varphi }_{Ik}|=\omega _{Ik}|\delta \varphi _{Ik}|`$, where $`\omega _{Ik}^2=(k/a)^2+m_I^2`$, with $`m_\chi =g\phi `$. We also take $`\dot{X}_{\mathrm{in}}=\omega _\chi X_{\mathrm{in}}`$. The initial metric perturbation $`(\mathrm{\Phi }_k)_{\mathrm{in}}`$ is then fixed by Eq. (5). The comoving wavenumber is $`k\approx ma(t_0)e^{-𝒩}\kappa \phi _{\mathrm{in}}/\sqrt{6}`$. We also take $`\stackrel{~}{g}/g\approx 10^{-2}`$, with $`g=\sqrt{4\pi /3}\times 10^{-3}`$ and $`m=10^{-6}M_{\mathrm{pl}}`$. This yields a resonance parameter $`q=3.8\times 10^5`$ which is used for all our simulations here. As well as tracking a scale that crosses the Hubble radius at $`t_{\mathrm{in}}`$, we consider scales that are within the Hubble radius at $`t_0`$, i.e., at the start of preheating, with $`k/a_0H_0>1`$. Although fluctuations on these small scales are cosmologically insignificant, we need to compare their evolution with those on very large scales, since this has a bearing on the question of backreaction.
The initial amplitude is given by $`a_0^3|\delta \varphi _{Ik}|^2=1/(2\omega _{Ik})`$, and for $`k/a_0\gg g\phi _0\gg m`$ we find that $`|\delta \varphi _{Ik}(t_0)|=1/(a_0\sqrt{2k})`$. The numerical results summarized in Fig. 1 confirm the analytical discussion above. The field and metric fluctuations on cosmological scales are resonantly amplified as expected. The curvature perturbation $`\zeta =\mathrm{\Phi }-H(\dot{\mathrm{\Phi }}+H\mathrm{\Phi })/\dot{H}`$, which would remain constant in standard reheating on super-Hubble scales in adiabatic models, instead shows violently non-adiabatic growth. This resonant amplification will be terminated by backreaction effects, which are governed by the growth of the variance $`\langle \chi ^2\rangle \propto \int dk\,k^2|\delta \chi _k|^2`$ (suitably renormalized and regularized ). Resonant growth on small scales will reinforce the backreaction, since the $`k^2`$ factor will weight the sub-Hubble contribution more strongly. Our simulations indicate that for the chosen value of $`q`$, resonance occurs for sub-Hubble scales with $`1<k/a_0H_0<100`$ at the start of preheating, which occurs at $`mt_0\approx 20`$. In Fig. 2 we plot the fluctuations for a mode with $`k/a_0H_0=10`$. In addition to the resonance, this shows that nonlinear growth in the sub-Hubble mode occurs before that of the super-Hubble mode of Fig. 1. Nonlinear growth of the super-Hubble modes may therefore be prevented, but since these modes begin to grow resonantly soon after the sub-Hubble modes ($`\mathrm{\Delta }(mt)\approx 20`$), we can expect some preheating growth in the power spectrum on cosmological scales. For other values of $`q`$ we expect that super-Hubble modes may grow the quickest, as explicitly occurs in some models which can be studied analytically . The study of backreaction (including both gravitational and matter-field contributions), and of the preheating imprint on the power spectrum, is currently in progress. An indication of how the strength of the super-Hubble resonance in $`\mathrm{\Phi }_k`$ is affected by changes in the coupling strength $`\stackrel{~}{g}/g`$ is given in Fig. 3. Here we have plotted the time $`t_{\mathrm{nl}}`$ when the metric and field fluctuations grow to be nonlinear, i.e., $`|k^{3/2}\mathrm{\Phi }_k(t_{\mathrm{nl}})|\approx 1`$, $`|k^{3/2}\delta \varphi _{Ik}|\approx M_{\mathrm{pl}}`$. The results show how $`t_{\mathrm{nl}}`$ increases in response to the suppression of initial conditions that occurs as $`\stackrel{~}{g}`$ is decreased. Note that synchronization occurs: all fluctuations share roughly the same $`t_{\mathrm{nl}}`$ values, so that we expect $`\langle \mathrm{\Phi }^2\rangle \sim \langle \varphi ^2\rangle /M_{\mathrm{pl}}^2,\langle \chi ^2\rangle /M_{\mathrm{pl}}^2`$. The importance of metric perturbations in determining backreaction has been independently noted in recent work . ### C Class II models In Class II models, the $`\chi `$ effective mass is simply very small during inflation but then becomes large at preheating. This occurs naturally in various models: • Globally SUSY hybrid models based on the superpotential $`W=\alpha S\overline{\phi }\phi -\mu ^2S`$, where the singlet $`S`$ plays the role of the inflaton. The corresponding unbroken potential is $`V=\alpha ^2|S|^2(|\phi |^2+|\overline{\phi }|^2)+|\alpha \phi \overline{\phi }-\mu ^2|^2`$, together with $`D`$-terms which vanish along the flat direction $`|\phi |=|\overline{\phi }|`$. For $`S\gg \mu /\sqrt{\alpha }`$, inflation occurs with the minimum of the potential at $`\phi =\overline{\phi }=0`$.
However, for $`S\ll \mu /\sqrt{\alpha }`$, $`V`$ has a new minimum at $`S=0`$, $`\phi =\mu /\sqrt{\alpha }`$, and preheating occurs via oscillations around this minimum . Now let us couple $`\chi `$ not to the inflaton $`S`$, but to the field $`\phi `$ through the term $`g^2\chi ^2|\phi |^2`$. Then the $`\chi `$ effective mass $`g|\phi |`$ vanishes during inflation (up to logarithmic corrections), and hence so does the suppression mechanism of . The effective mass only departs strongly from zero once inflation ends and reheating begins, leading to a huge increase in the value of the resonance parameter $`q`$. • In models with strong running of coupling constants, where the beta function is negative, such as occurs in QCD, the theory is asymptotically free and the coupling increases at lower energies. Perhaps the strongest examples of this are based on $`S`$-type dualities, where the coupling $`g^2`$ is very small during inflation but is very large during reheating, which occurs in the strongly coupled phase with dual coupling $`1/g^2\ll 1`$. An example is provided by 'dual inflation' , where $`m_{\chi ,\mathrm{eff}}\sim g\varphi <H`$, and $`\chi `$ fluctuations are similar to those in the inflaton, and not strongly suppressed. In fact, it is arguable that models of this sort are needed if preheating is to be viable in non-SUSY theories, since large $`g`$ leads to radiative corrections to the potential which may violate the slow-roll conditions for inflation. ## III New Cosmological effects Our eventual goal must be to calculate physical quantities such as the power spectrum of $`\mathrm{\Phi }_k`$. Since $`P_\mathrm{\Phi }=k^3|\mathrm{\Phi }_k|^2/2\pi ^2`$, one might be concerned that these strong preheating effects at $`k\to 0`$ would be made irrelevant by the $`k^3`$ phase space factor. Perhaps the easiest way to see that this is not so is to look at the evolution of $`\zeta _k`$. Since $`\zeta _k`$ is not conserved for small $`k`$ (see Fig. 1), the standard normalization of the CMB spectrum is increased. This can only take place if the power spectrum of metric fluctuations is strongly affected as $`k\to 0`$. This is understandable since preheating acts only as a non-trivial transfer function $`T(k)`$. Beyond the effects discussed in , metric preheating can lead to a host of interesting new effects. • The growth of $`\zeta _k`$ implies amplification of isocurvature modes in unison with adiabatic scalar modes on super-Hubble scales. Preheating thus yields the possibility of inducing a post-inflationary universe with both isocurvature and adiabatic modes on large scales. If these are uncorrelated and of roughly equal strength, the corresponding Doppler peaks will tend to cancel . (This mechanism is independent of the one discussed in , which requires nonlinearity to persist until decoupling.) However, if the adiabatic and isocurvature modes are strongly correlated, this would create the possibility of a "smoking gun" finger-print of preheating. The challenge remains to distinguish such correlations from those induced in hybrid inflation. • Because the metric perturbations can go nonlinear, whether on sub- or super-Hubble scales, the corresponding $`\chi `$ density perturbations $`\delta `$ typically have non-Gaussian statistics. This is simply a reflection of the fact that $`-1\le \delta <\infty `$, so that the distribution of necessity becomes skewed and non-Gaussian.
Further, in Class II models, where $`\chi =0`$ during inflation, $`\chi `$ perturbations in the energy density will necessarily be non-Gaussian (chi-squared distributed), even if $`\delta \chi _k`$ is Gaussian distributed, since stress-energy components are quadratic in the fluctuations (see e.g. ). Non-Gaussian effects are therefore an intrinsic part of many metric preheating models (particularly those in Class II), and open up a potential signal for detection in future experiments. • Another new feature we would like to identify is the breaking of conformal invariance. Once metric perturbations become large on some scale, the metric on that scale cannot be thought of as taking the simple Friedmann-Lemaitre-Robertson-Walker (FLRW) form, and conformal invariance is lost. This is particularly important for the production of primordial magnetic fields, which are usually strongly suppressed due to the conformal invariance of the Maxwell equations in a FLRW background. The coherent oscillations of the inflaton during preheating further provide a natural cradle for producing a primordial seed for the observed large-scale magnetic fields. A charged inflaton field, with kinetic term $`D_\mu \varphi (D^\mu \varphi )^*`$, will couple to electromagnetism through the gauge covariant derivative $`D_\mu =\partial _\mu -ieA_\mu `$. This will naturally lead to parametric resonant amplification of the existing magnetic field, which could produce large-scale coherent seed fields on the required super-Hubble scales without fine-tuning . (Note that a tiny seed field must exist during inflation due to the conformal trace anomaly and one-loop QED corrections in curved spacetime .) In conclusion, the suppression discussed in is highly sensitive to the form of the particle interactions considered; when couplings are considered which are found in most realistic particle physics models, the effects of recede. Instead, in models from either of the two general classes highlighted here, preheating can produce a strong amplification of metric perturbations on cosmologically significant scales. Metric preheating thus allows us to rule out models in which backreaction effects fail to prevent super-Hubble nonlinear growth, and shows that in the surviving models, there will typically be some signature of preheating imprinted on the power spectrum. The robustness of the amplification further demonstrates the need to move towards more realistic models of preheating in order to develop a realistic understanding of the predictions of inflation for observational cosmology. We thank Kristin Burgess for detailed comments and Paul Anderson, Jürgen Baacke, Arjun Berera, Anthony Challinor, Fabio Finelli, Alan Guth, Karsten Jedamzik, Dimitri Pogosyan, Giuseppe Pollifrone, Dam Son, Alexei Starobinsky and David Wands for stimulating discussions. DK receives partial support from NSF grant PHY-98-02709.
# Near-Infrared Spectra of Ultraluminous Infrared Galaxies ## 1 Introduction From the initial realization that Ultraluminous Infrared Galaxies (hereafter ULIRGs) represent a significant class of the highest luminosity objects in the local universe (Soifer et al., 1987; Sanders et al., 1988), a central question has been what powers this luminosity. Various studies have attempted to address this question by probing the centers of these highly dust enshrouded systems at wavelengths where the opacity of dust is vastly reduced from that found at visible wavelengths, ranging from X-ray (Rieke, 1988; Nakagawa et al., 1999) to near-infrared (Carico et al., 1990; Veilleux et al., 1999) to mid-infrared (Genzel et al., 1998; Soifer et al., 1999), as well as to radio wavelengths (Smith, Lonsdale, & Lonsdale, 1998). These studies have generally found that direct detection of AGN (Active Galactic Nucleus) properties in the nuclei of these systems is rare. In this letter we summarize the results of a near-infrared spectroscopic survey of 33 bright, nearby ULIRGs undertaken for the purpose of identifying sources where the effects of dust might be hiding the presence of an AGN in the visible spectra of ULIRGs. This is a natural explanation of the paucity of such spectra, given that the ULIRGs are known to have extremely gas and dust rich nuclear regions (Sanders et al., 1988). The reduced extinction in the near-infrared as compared to the visible (i.e. $`A_{Pa\alpha }0.1A_V`$, Rieke & Lebofsky, 1985) enables extremely sensitive searches for characteristics of AGN hidden by large columns of dust. This survey concentrates on searching for a high velocity component to the strong Pa$`\alpha `$ line, as well as the presence of the very high excitation \[Si VI\] line. The detailed results of this study will be presented elsewhere (Murphy et al., 2000). ## 2 The Sample The complete sample of 33 ULIRGs for this spectroscopic study is a subset of the sample of Strauss et al. (1990, 1992), which is a complete sample of galaxies selected to have $`S_\nu >`$1.94 Jy at 60$`\mu `$m and an infrared luminosity of $`10^{12}L_{\odot }`$. The selection criteria for the present sample additionally require a declination $`\delta >-35^{\circ }`$, and a redshift between $`0.055<z<0.108`$. This last constraint allows observations of both Pa$`\alpha `$ and Br$`\gamma `$ in the 2$`\mu `$m atmospheric window. The list of sample galaxies can be read directly out of Murphy et al. (1996) with the above redshift constraint imposed. Two of the 35 galaxies from this sublist have been removed from the present sample: IRAS 21396$`+`$3623 is actually at a redshift of $`z=0.1493`$, and IRAS 19297$`-`$0406, at galactic latitude $`b\approx -11^{\circ }`$, does not appear to be properly identified as a ULIRG. ## 3 Observations and Data Reduction Observations were obtained with the Palomar Longslit Infrared Spectrograph (Larkin et al., 1996) on observing runs over the period 1996 January to 1997 December. In all cases the object was observed in at least 2 grating settings, and for about 90% of the objects 3 grating settings were employed. Each grating setting covers $`0.12\mu `$m with a scale of 0.0006 $`\mu `$m pixel⁻¹. The slit width of 4 pixels, or 0.67 arcsec, corresponds to a spectral resolution of $`R\equiv \lambda /\mathrm{\Delta }\lambda \approx 1100`$, corresponding to $``$280 km s⁻¹.
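The redshift window quoted above follows from requiring both lines to fall in the K band, and can be reproduced with a short calculation. In the sketch below the adopted atmospheric window limits (1.98–2.40 $`\mu `$m) are assumed round numbers of our own, not values taken from the paper.

```python
# Assumed K-band atmospheric window (our round numbers): 1.98-2.40 um.
lam_PaA, lam_BrG = 1.8751, 2.1661        # rest wavelengths in microns
win_lo, win_hi = 1.98, 2.40
z_min = win_lo / lam_PaA - 1.0           # Pa-alpha redshifted into the window
z_max = win_hi / lam_BrG - 1.0           # Br-gamma still inside the window
print(f"{z_min:.3f} < z < {z_max:.3f}")  # -> 0.056 < z < 0.108, cf. 0.055 < z < 0.108
```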
The grating settings were chosen to cover the Pa$`\alpha `$ line at 1.8751$`\mu `$m, the suite of Br$`\delta `$, H₂ 1–0 S(3) and \[Si VI\] lines centered at 1.954$`\mu `$m, and if a third setting was used, the H₂ 1–0 S(1) and Br$`\gamma `$ lines centered at 2.14$`\mu `$m. The slit was generally rotated to coincide with the major axis of the ULIRG, where evident, or such that spectra of a secondary nucleus were simultaneously obtained when possible. Typically, integration times were 1800 s at each grating setting, with the object alternately displaced along the slit such that sky integrations were obtained simultaneously. Wavelength calibration was provided by a combination of OH airglow lines and arc lamp spectra. Atmospheric transmission is compensated for by observing the nearly featureless spectra of G V stars, with the spectrum then divided by a Planck curve of the appropriate temperature. The Br$`\gamma `$ absorption feature in the G star is removed by interpolation. Other minor absorption features present in the G star calibration spectrum are removed using a template spectrum from Kleinmann & Hall (1986). Spectral extractions, centered on the primary nucleus of each ULIRG, were chosen to match the seeing at the time of observation, typically 0.8–1.2 arcsec. Extractions on the secondary nuclei, when applicable, were performed in the same manner. A more detailed description of the observations and data reduction procedures is deferred to Murphy et al. (2000), in which the individual spectra will also be presented. ## 4 Results The most striking property of the collection of ULIRG spectra is that, with a few notable exceptions, there exists remarkable similarity among the vast majority (31 of 33) of spectra of galaxies in the sample. This similarity prompted us to combine all the similar spectra into a single, representative composite ULIRG spectrum. It is our belief that this composite spectrum provides an accurate representation of the typical ULIRG spectrum. Figure 1 shows the median of spectra from 31 of the ULIRGs, normalized to their continuum level and shifted in wavelength corresponding to the redshift of Pa$`\alpha `$. The objects selected to be included in the combination were chosen to show no obvious evidence for broad lines or high excitation lines based on inspection of the individual spectra. Combining the spectra facilitates identification of any faint features that might be commonly present but with low signal to noise ratio in individual spectra. An average spectrum was also generated, the features of which are indistinguishable from those in the median spectrum. Specific properties and identifications of the lines in the median spectrum are discussed in Section 4.1. Figure 2 demonstrates the range of line strengths seen in the ULIRG sample in the form of equivalent widths of the Pa$`\alpha `$ and H₂ 1–0 S(3) lines. Most of the ULIRGs lie within roughly one bin-width of the median value, as displayed in Figure 2, though there are a number of outliers with significantly higher equivalent widths than the median value. We do not find any correlation between the Pa$`\alpha `$ and H₂ equivalent widths.
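The construction of the composite can be sketched in a few lines. This is our schematic reconstruction of the stated procedure (continuum normalization, shift to the Pa$`\alpha `$ rest frame, median combination); the median-based continuum estimate and the grid choice are our simplifications, not the authors' actual pipeline.

```python
import numpy as np

def composite_spectrum(spectra, redshifts, rest_grid):
    """Median-combine continuum-normalized spectra in the Pa-alpha rest frame.

    spectra   : list of (wavelength_um, flux) arrays, one per object
    redshifts : redshift of each object, measured from its Pa-alpha centroid
    rest_grid : common rest-frame wavelength grid (microns)
    """
    stack = []
    for (lam, flux), z in zip(spectra, redshifts):
        lam_rest = lam / (1.0 + z)               # shift to rest frame
        norm = flux / np.median(flux)            # crude continuum normalization
        stack.append(np.interp(rest_grid, lam_rest, norm,
                               left=np.nan, right=np.nan))
    return np.nanmedian(np.vstack(stack), axis=0)

rest_grid = np.linspace(1.83, 2.20, 2000)        # microns, rest frame
```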
A median spectrum of 13 secondary ULIRG nuclei has also been constructed for the purpose of comparing the properties of primary and secondary nuclei. While two of the secondary nuclei exhibit featureless spectra, the median spectrum shows that secondary nuclei tend to be spectroscopically similar to the primary ULIRG nuclei. In general, all of the spectral features visible in the primary nuclei (i.e. Figure 1), with the exception of the He I line, are visible in the median spectrum composed of secondary ULIRG nuclei, though with lower equivalent widths. ### 4.1 Median Spectrum Line Properties #### 4.1.1 Atomic Recombination Lines As can be seen in Figure 1, Pa$`\alpha `$ emission dominates the near-infrared spectrum of the typical ULIRG. Other recombination lines of hydrogen, specifically Br$`\gamma `$, Br$`\delta `$, and Br$`ϵ`$, are also readily seen in the median spectrum. The extinction to the line emitting region can be crudely estimated from the ratio of the Pa$`\alpha `$ and Br$`\gamma `$ lines. The reddening derived in this way from the median spectrum is $`A_V=5`$–10 mag. There is some evidence that the extinction is higher in the secondary ULIRG nuclei. The weak line at 1.869$`\mu `$m is identified with a pair of He I recombination lines at 1.8686$`\mu `$m and 1.8697$`\mu `$m, with a centroid at 1.8689$`\mu `$m. We have found that this line is comparable in strength to the He I line at 2.058$`\mu `$m. Because of the small separation between the Pa$`\alpha `$ line and the He I line, at lower spectral resolution this line might be confused with a blue wing of the Pa$`\alpha `$ line. #### 4.1.2 Molecular Hydrogen Lines Vibration-rotation lines of H₂ are also prominent in the median spectrum. The H₂ 1–0 lines S(1), S(3), S(4), and S(5) are clearly present in the median spectrum and the S(2) line is evident at low significance. There is marginal evidence for the H₂ 2–1 S(2) and S(4) lines in the median spectrum, but these are at very low significance. The observed H₂ spectrum is consistent with purely thermal excitation at a temperature of $`T=T_{vib}=T_{rot}\approx 2500`$ K. At this temperature, contributions to the spectrum from H₂ 6–4 O(5) and 7–5 O(3) at $`\lambda \lambda `$1.8665, 1.8721, potential contributors to the blue wing of Pa$`\alpha `$ (e.g., Veilleux et al., 1999), are of negligible significance. The appearance of line emission at the position of the 2–1 S(4) line is inconsistent with the apparent weakness of the 2–1 S(2) line, suggesting the presence of \[Fe II\] at 2.0024$`\mu `$m. This possibility is further discussed below. At a spectral resolution of 280 km s⁻¹, the H I lines appear to be spectrally unresolved in the median spectrum of Figure 1. The H₂ lines, on the other hand, appear to have a broader spectral profile, especially evident near the bases of these lines. Random velocity offsets between the H I and H₂ line emission would indeed tend to broaden the H₂ lines when using Pa$`\alpha `$ as the redshift reference for the combination. However, when the H₂ lines themselves are used as the redshift reference, the H₂ profiles are seemingly unaffected, with no apparent broadening of the Pa$`\alpha `$ profile.
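The screen-extinction estimate from the Pa$`\alpha `$/Br$`\gamma `$ ratio can be illustrated with a few lines of code. The intrinsic case B ratio ($``$12) and the power-law near-infrared extinction slope used below are our assumptions for this sketch; the paper does not spell out its exact prescription.

```python
import numpy as np

def A_V_from_ratio(ratio_obs, ratio_int=12.0, beta=1.75):
    """Visual extinction from an observed Pa-alpha/Br-gamma flux ratio,
    assuming a foreground screen and A_lam = A_V (lam/0.55um)^-beta."""
    lam_Pa, lam_Br, lam_V = 1.8751, 2.1661, 0.55
    a_Pa = (lam_Pa / lam_V)**(-beta)      # A(Pa-alpha)/A_V ~ 0.12
    a_Br = (lam_Br / lam_V)**(-beta)      # A(Br-gamma)/A_V ~ 0.09
    # ratio_obs = ratio_int * 10**(-0.4 * (a_Pa - a_Br) * A_V)
    return 2.5 * np.log10(ratio_int / ratio_obs) / (a_Pa - a_Br)

print(A_V_from_ratio(10.0))   # a ratio of ~10 gives A_V ~ 8 mag, within 5-10 mag
```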
#### 4.1.3 Iron Lines

A faint line appears in the median spectrum at 1.967$`\mu `$m, close to the wavelength of \[Si VI\] at 1.963$`\mu `$m. We do not identify this line with the \[Si VI\] line for two reasons. First, in two objects we directly detect the \[Si VI\] line correctly centered at 1.963$`\mu `$m (see below). In addition, there is a matching line from \[Fe II\] at 1.967$`\mu `$m. Such a feature is consistent with the known strength of \[Fe II\] lines in starburst galaxies of lower luminosity. This identification would imply that the \[Fe II\] 1.644$`\mu `$m and 1.258$`\mu `$m lines should be present at much greater strength in these spectra. While such observations are limited, there is evidence (e.g., Veilleux et al., 1997) that such strong Fe lines are common in ULIRGs. The presence of the 1.967$`\mu `$m \[Fe II\] line also supports the existence of the suspected \[Fe II\] line at 2.0024$`\mu `$m, coincident with the H<sub>2</sub> 2–1 S(4) line. The 2.0024$`\mu `$m \[Fe II\] line is expected to be two to three times weaker than the line at 1.967$`\mu `$m, which is consistent with the observed low-level line at 2.003$`\mu `$m being a blend of \[Fe II\] and H<sub>2</sub> lines.

### 4.2 ULIRGs with AGN Properties

Figure Near-Infrared Spectra of Ultraluminous Infrared Galaxies shows spectra of the two objects in the sample where evidence of AGN-associated spectral features is clearly present in the individual object spectra. While the spectra of these two galaxies have characteristics not found in the median spectrum, these spectra are still quite similar in appearance to the median spectrum of Figure Near-Infrared Spectra of Ultraluminous Infrared Galaxies. The near-infrared spectrum of IRAS 08311-2459 ($`z=0.1006`$), presented in Figure Near-Infrared Spectra of Ultraluminous Infrared Galaxies, shows a strong and slightly broad \[Si VI\] line along with the H I recombination lines and H<sub>2</sub> lines. The strength of the \[Si VI\] line is quite unusual for ULIRGs in general, being roughly equal in strength to the H<sub>2</sub> 1–0 S(3) line. The presence of the \[Si VI\] line directly demonstrates the existence of high energy photons, since the ionization potential of Si<sup>4+</sup> is 167 eV. IRAS 08311-2459 also shows the suggestion of a broad Pa$`\alpha `$ base with an asymmetric blue wing that is not consistent with being a He I recombination line. This appears to be a high velocity component of the H I line. The near-infrared spectrum of IRAS 15462-0450 ($`z=0.1003`$), presented in Figure Near-Infrared Spectra of Ultraluminous Infrared Galaxies, shows a broad component of Pa$`\alpha `$ in addition to the strong narrow component of this line, as well as evidence for the He I line. There is no evidence for the \[Si VI\] line in the spectrum, though the H<sub>2</sub> 1–0 S(3) line is clearly present. For comparison, a median quasar spectrum is included in Figure Near-Infrared Spectra of Ultraluminous Infrared Galaxies. This spectrum is composed of ten quasars ranging in redshift from $`z=0.089`$ to 0.182, observed with the same instrument and in the same manner as described above for the ULIRGs. The median quasar spectrum is dominated by broad Pa$`\alpha `$ emission, though a velocity-broadened blend of Br$`\delta `$, H<sub>2</sub> 1–0 S(3), and \[Si VI\] can also be discerned. The \[Si VI\] line appears to be comparable in strength to the H<sub>2</sub> 1–0 S(3) line, as is also the case for IRAS 08311-2459.
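For orientation (our arithmetic, not the paper's), the 167 eV ionization potential quoted above corresponds to photon wavelengths

$$\lambda =\frac{hc}{E}\approx \frac{1239.8\text{ eV nm}}{167\text{ eV}}\approx 7.4\text{ nm}=74\text{ Å},$$

i.e. extreme-UV/soft X-ray photons, far harder than any stellar radiation field supplies, which is why \[Si VI\] serves as a discriminating AGN signature.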
A more detailed account of the quasar spectra will accompany the paper describing the individual ULIRG spectra (Murphy et al., 2000).

## 5 Discussion

The most direct result from this work is that the great majority of ULIRGs (31 of 33) show no indication of AGN activity when viewed at 2$`\mu `$m with relatively high spectral resolution and sensitivity. Of the 33 objects observed, 90% have luminosities $`L<2\times 10^{12}L_{\odot }`$, and of these nearly all (29/30) lack spectroscopic evidence for broad line regions or high energy photons. Of the 3 objects with $`L_{ir}\gtrsim 2\times 10^{12}L_{\odot }`$, one shows spectroscopic evidence of AGN activity. While lacking the numbers to make a statistically significant argument, this fraction is consistent with findings in previous near-infrared spectroscopic surveys, even though our resolution and sensitivity are higher than in the previous studies (Veilleux et al., 1997, 1999). Where evidence for AGN activity is present, it is readily detectable in the individual spectrum, and spectroscopic evidence for the AGN activity is also present in the visible light spectrum of the object. IRAS 15462-0450 is classified as a Seyfert 1 system based on its visible spectrum, while our unpublished observations of IRAS 08311-2459 show that it has a Seyfert 2 visible spectrum. Lutz, Veilleux, & Genzel (1999) also report that ULIRGs showing infrared signatures of the AGN phenomenon are usually classified as active galaxies based on their visible light spectra. The converse statement—that all ULIRGs showing no evidence for AGN activity in their near-infrared spectra are optically classified as non-active galaxies—seems also to be supported. Of the 16 non-AGN ULIRGs in the present sample with available optical spectroscopic identifications, only one (IRAS 14394$`+`$5332) is characterized as a Seyfert galaxy, based on a marginally high \[O III\]/H$`\beta `$ ratio (Kim, Veilleux, & Sanders, 1998). While this galaxy does exhibit a blue wing on the H I and H<sub>2</sub> lines, there is no corresponding emission redward of the unresolved narrow line component. We therefore do not associate this velocity structure with a massive compact central source, but rather with organized gas motion, such as might be found in outflow phenomena. The median spectrum of the ULIRGs that do not show evidence for AGN activity presents an important new result: no faint broad or high-excitation lines appear in the combined spectrum that were not already detected in the individual spectra. The relative H I and H<sub>2</sub> line strengths in the AGN and median spectra are quite comparable, while the limits on the strength of a broad component of Pa$`\alpha `$ or \[Si VI\] 1.963$`\mu `$m are at least an order of magnitude lower in the median spectrum than in the individual sources with AGNs clearly evident. Furthermore, no luminosity dependent spectroscopic trends are seen in the 31 non-AGN ULIRGs when the spectra are divided into subsets and median-combined in separate luminosity bins. In particular, there is no more evidence for low-level broad Pa$`\alpha `$ in the higher luminosity ULIRGs than there is in the lower luminosity sets or in the collection as a whole. These findings can be explained in several ways. Most obviously, it could be that there are simply no AGN associated with the ULIRGs where such features are not obvious. This would imply that for $`L_{ir}<2\times 10^{12}L_{\odot }`$, only a very small fraction of ULIRGs are powered by AGNs.
Presumably the power source in these cases is star formation in extremely compact star forming regions. Alternatively, if all these sources harbor an AGN, it could be that the fraction of sources with visible spectroscopic evidence for AGN activity is a measure of geometric effects, i.e. there are highly attenuating dust tori with $`A_{Pa\alpha }>2.5`$ mag covering $`>`$90% of the sky as seen from the central source. Other more carefully contrived configurations of dust are also possible. While we cannot conclusively argue one way or another, the highly sensitive survey we have conducted for dust-enshrouded AGN, combined with other such surveys at other wavelengths, is providing mounting evidence that star formation is powering the vast majority of ULIRGs with $`L_{ir}<2\times 10^{12}L_{\odot }`$. We thank Michael Strauss for his role in defining the sample of ULIRGs, and for participating in the early stages of the Caltech effort in studying ULIRGs. We also thank Gerry Neugebauer for helpful discussions. Many persons accompanied us on the observing runs to Palomar, most notably Rob Knop and James Larkin, both of whom provided expert tutelage on the use of the spectrograph, and on methods of data reduction. We thank the night assistants at Palomar, Juan Carrasco, Rick Burruss, Skip Staples, and Karl Dunscombe, for their assistance in the observations. T.W.M. is supported by the NASA Graduate Student Researchers Program, and the Lewis Kingsley Foundation. This research is supported by a grant from the National Science Foundation.
# Photoinduced IR absorption in (La1-xSrxMn)1-δO3: changes of the anti-Jahn-Teller polaron binding energy with doping

## I Introduction

Manganites with the chemical formula (R<sub>1-x</sub>A<sub>x</sub>)MnO<sub>3</sub> (R and A are trivalent rare-earth and divalent alkaline-earth ions, respectively) in which colossal magnetoresistance is observed are similar to high $`T_c`$ cuprates in many ways. In both families of compounds the physical properties are strongly influenced by changes of the carrier concentration in the relevant $`d`$- and $`p`$-derived electronic bands, and the interplay between lattice and electronic degrees of freedom is essential for the low energy physics of both. It has been shown in high $`T_c`$ cuprates that measurements of the photoinduced (PI) absorption are a useful tool for investigating the nature of low-lying electronic states, especially in the range of weak hole doping, where PI absorption spectra are interpreted in terms of the photon-assisted hopping of small polarons. On the other hand, features in the optical and Raman scattering spectra of cubic/pseudo-cubic manganites, as well as of related layered compounds, indicate the presence of small polarons. Since photoinduced transient changes of physical properties have already been observed in this class of compounds, we expect that photoinduced infrared absorption might reveal interesting spectral information about polarons also in manganites. In this paper we present PI absorption measurements in (La<sub>1-x</sub>Sr<sub>x</sub>Mn)<sub>1-δ</sub>O<sub>3</sub> at various hole dopings, concentrating mostly on the weak hole doping levels. We find clear evidence of a photoinduced mid-infrared polaronic peak and we apply a theoretical analysis to the spectra in order to deduce the dependence of the polaron binding energy on doping.

## II Experimental

### A Sample preparation and experimental setup

The method of preparation and characterization of the ceramic samples with nominal composition (La<sub>1-x</sub>Sr<sub>x</sub>Mn)<sub>1-δ</sub>O<sub>3</sub> ($`x=0`$, 0.1, 0.2) has been published elsewhere. Since cation deficiency is common to this class of materials, we performed AC susceptibility measurements on powders in the 50 K–350 K range to establish the Mn<sup>4+</sup> content from their Curie temperatures. The two Sr-doped samples which we used had $`T_C\approx 210`$ K and $`T_C\approx 340`$ K (midpoint, see Fig. 1) for $`x=0.1`$ and $`0.2`$, respectively. It was found that despite the absence of Sr the sample with $`x=0`$ showed a transition to the ferromagnetic (FM) state at $`T_C\approx 170`$ K. One portion of the $`x=0`$ sample was treated at 900 °C for 300 min in Ar flow to decrease the cation deficiency. This sample showed no sign of any ferromagnetic transition, and so it was concluded that $`\delta `$ is sufficiently small that it is antiferromagnetic (AFM) and insulating below $`T_N=140`$ K. According to the phase diagram in Urushibara et al., the FM $`x=0`$ and $`x=0.1`$ samples are insulating below $`T_C`$, while the $`x=0.2`$ sample is metallic below $`T_C`$ and shows giant magnetoresistance (GMR) around $`T_C`$. Normal and photoinduced transmittance measurements in the MIR and NIR regions were performed by means of two separate Bomem MB series Fourier transform spectrometers set at 16 cm<sup>-1</sup> resolution. The powder samples were mixed with KBr powder in 0.1-0.2 wt. % ratio and pressed into 12 mm diameter pellets. The pellets were mounted in the Oxford liquid-He flow optical cryostat equipped with KRS-5 windows.
Special care was taken that the KBr pellet was in good thermal contact with the sample holder. The excitation CW Ar<sup>+</sup>-ion-laser light with 514.5 nm wavelength ($`h\nu =2.41`$ eV) was guided into the cryostat by an optical fibre. Due to space limitations in the cryostat the size of the spot illuminated by the laser was only $`\approx 5`$ mm in diameter in the center of the pellet. The maximum excitation optical fluence $`\mathrm{\Phi }`$ was $`\approx `$500 mW/cm<sup>2</sup>. The laser excitation was switched on and off by an electro-mechanical chopper placed in the laser beam path. To minimize heating effects due to laser light absorption in the sample, as well as instrumental drift, the PI spectra were taken by repeating one sample scan (with the excitation laser on) and one reference scan (laser off) approximately every two seconds. At each temperature the thermal-difference (TD) transmittance change was also measured without laser excitation by first measuring the reference spectrum and then increasing the sample holder temperature by 2 K and measuring the sample spectrum. To remove any drifts, the same procedure was then inverted, measuring the sample spectrum 2 K above the given temperature first. The thermal-difference (TD) transmittance change was then obtained by averaging both spectra.

### B Experimental results

The transmittance spectra $`𝒯`$ of the samples are shown in Fig. 1. First let us discuss the normal (non-photoinduced) infrared phonon spectral bands. In all FM samples we observe two IR phonon bands, at 640 cm<sup>-1</sup> and 400 cm<sup>-1</sup>. There is no appreciable shift as a function of $`x`$ in the FM region of doping. In the AFM sample, the lower 400-cm<sup>-1</sup> phonon band is split into two bands at 376 cm<sup>-1</sup> and 420 cm<sup>-1</sup>, which appear further split at the peak. There are also two additional shoulders, at 457 and 509 cm<sup>-1</sup>. The appearance of the additional phonon bands in the AFM phase is in accordance with the behaviour expected from its lower point symmetry, due to the static rotation and Jahn-Teller (JT) distortion of the MnO<sub>6</sub> octahedra compared to the FM phases. The high frequency phonon band is shifted downwards to 585 cm<sup>-1</sup> with respect to the FM samples and appears asymmetric, with a shoulder-like tail extending toward high frequencies. It is not clear whether this is a single phonon band or two overlapping bands. In addition to the phonon bands we observe a broad absorption, which increases in amplitude and shifts to lower energy, forming a MIR peak as the hole doping is increased, in agreement with previously reported optical data. (It should be noted that the shape of the peak in the NIR region is strongly influenced by the scattering in the KBr pellet. Fortunately, the spectral distortions due to the scattering cancel out in the PI absorption spectra.) The low temperature ($`T=25`$ K) PI transmittance $`(\frac{\mathrm{\Delta }𝒯_{PI}}{𝒯})`$ spectra of all four samples are shown in Fig. 2a. In addition, the TD transmittance $`(\frac{\mathrm{\Delta }𝒯_{TD}}{𝒯})`$ spectra taken at the same temperature are shown in Fig. 2b. In the $`x=0`$ AFM sample a strong broad PI midinfrared (MIR) absorption (negative PI transmittance) centered at $`\approx 5000`$ cm<sup>-1</sup> ($`\approx 0.62`$ eV) is observed. The virtually flat TD spectrum in Fig. 2b shows that the PI absorption is clearly not due to laser heating effects.
In the frequency range of the phonon bands we observe PI phonon bleaching in the range of the 585-cm<sup>-1</sup> phonon band and a slight PI absorption below 580 cm<sup>-1</sup>. The PI phonon bleaching consists of two peaks, at 600 and 660 cm<sup>-1</sup> respectively, with a dip in-between at 630 cm<sup>-1</sup>. These two PI transmission peaks are reproducible among different runs, while the structure of the PI absorption below 580 cm<sup>-1</sup> is not, and presumably arises due to increasing instrumental noise at the lower end of the spectral range. In the $`x=0`$ FM sample there is a much weaker PI absorption centered around $`\approx 3000`$ cm<sup>-1</sup> (0.37 eV). A comparison with the TD spectrum, which exhibits TD transmission below 4000 cm<sup>-1</sup>, shows that the PI absorption is not thermally induced. The spectra in the $`x=0.1`$ and $`x=0.2`$ samples show no significant PI signal and are flat within the noise level. TD transmission below 2000 cm<sup>-1</sup>, centered around 1200 cm<sup>-1</sup>, is observed in the $`x=0.1`$ sample. The absence of any PI signal in the same sample confirms again that no thermally induced signal is present in the PI spectra, so we can eliminate thermal effects from the discussion. To obtain information about the recombination dynamics of the PI carriers in the $`x=0`$ AFM sample, the integrated intensity of the PI absorption peak was measured as a function of the laser fluence and is shown in Fig. 3a. It can clearly be seen that the integrated intensity of the peak is not proportional to the laser fluence $`\mathrm{\Phi }`$, but shows a square-root dependence on the laser fluence, $`\frac{\mathrm{\Delta }𝒯_{PI}}{𝒯}\propto \sqrt{\mathrm{\Phi }}`$. The temperature dependence of the PI-absorption-peak integrated intensity in the $`x=0`$ AFM sample is shown in Fig. 3b. The integrated intensity quickly diminishes with increasing temperature, disappearing between 80 and 100 K. There is no significant shift of the PI-absorption peak observed with increasing temperature, and no changes in the PI spectra are observed around the Néel temperature $`T_N`$.

## III Discussion

When a photon of visible light is absorbed in the sample, a primary hole-electron pair is first created. In LaMnO<sub>3</sub> at the incoming photon energy of 2.4 eV the hole-electron pair corresponds to a charge transfer from the occupied O $`2p`$ derived bands to the unoccupied Mn $`e_g`$ and Mn $`t_{2g}`$ derived bands. The primary electron and hole are expected to relax relatively quickly by exciting secondary lower energy hole-electron pairs, among other low energy excitations (phonons, magnons), since the transport gap of 0.25 eV is almost ten times smaller than the primary pair energy. The observed laser fluence dependence of the integrated intensity of the PI absorption peak shown in Fig. 3a indicates that the photo-excited particle density is proportional to the square root of the laser fluence. In the simplest model the photo-excited particle density $`n_{pe}`$ is governed by: $$\frac{dn_{pe}}{dt}=\alpha \mathrm{\Phi }-r,$$ where $`\alpha `$ is a constant, $`\mathrm{\Phi }`$ the laser fluence and $`r`$ the recombination rate.
In steady state (the laser photoexcitation is pseudo-continuous in our experiment) $`n_{pe}`$ is time independent, and taking into account the experimental fact that $`n_{pe}\propto \sqrt{\mathrm{\Phi }}`$, it follows that $$r\propto \mathrm{\Phi }\propto n_{pe}^2.$$ This clearly indicates a biparticle recombination process, with rate $`r\propto n_{pe}^2`$, where two independent photoexcited particles interact during recombination. The observed PI absorption peak therefore most likely corresponds to the excitations of individual electron-like and/or hole-like charge carriers created during the relaxation of the primarily photoexcited electron-hole pairs. As in high $`T_c`$ cuprates, in (La<sub>1-x</sub>Sr<sub>x</sub>Mn)<sub>1-δ</sub>O<sub>3</sub> a PI signal of significant magnitude is observed only in the lower range of hole doping. Since the laser photoexcitation and the measurement are pseudo-continuous, the photoexcited carrier lifetimes need to be quite long for any significant photoexcited carrier density to build up. Thus the absence of the PI signal at higher doping levels does not necessarily mean the absence of photoinduced carriers or polarons, but more likely signifies shorter photoexcited-carrier lifetimes. As a consequence, in the rest of the discussion we mainly focus on the $`x=0`$ samples, where the PI signal is observed. In (LaMn)<sub>1-δ</sub>O<sub>3</sub> the majority of Mn ions have one electron in the split $`e_g`$ orbitals, surrounded by a static JT distortion of the MnO<sub>6</sub> octahedra. If the photoexcitation results in an additional photoexcited hole in the occupied orbital, or an additional photoexcited electron is put into the second, JT-split empty $`e_g`$ orbital, the Jahn-Teller mechanism is disabled and the JT lattice deformation is reduced around this site. The photoexcited charge carriers can thus form anti-JT polarons, which behave very much like ordinary polarons. One expects that the characteristic shape of the PI absorption is the same as in the case of normal polarons. Indeed, we find that the shape of the observed PI absorption peak is consistent with the theoretically predicted absorption due to photon-assisted hopping of small polarons, as seen from the analysis that follows. In Fig. 2a a fit of the absorption due to small-polaron hopping given by Emin is shown for both $`x=0`$ samples, assuming that $`\alpha \propto \frac{\mathrm{\Delta }T}{T}`$: $$\alpha \propto \frac{1}{\hbar \omega }\mathrm{exp}\left(-\frac{(2E_b-\hbar \omega )^2}{4E_b\hbar \omega _{ph}}\right)$$ where $`\alpha `$ is the absorption coefficient, $`E_b`$ is the polaron binding energy, $`\omega `$ the incoming photon frequency and $`\omega _{ph}`$ the polaron phonon frequency. It can be seen that the theoretical prediction fits the data well, with small-polaron binding energies $`E_b=350\pm 8`$ meV in the $`x=0`$ AFM and $`E_b=200\pm 10`$ meV in the $`x=0`$ FM samples, respectively. The polaron binding energies $`E_b`$ and polaron phonon frequencies $`\omega _{ph}`$ obtained from the fit are summarized in Table I. The obtained small-polaron binding energies are similar to those inferred from transport measurements and decrease with increased hole doping in a similar manner. The frequencies of the polaron phonons from the fit are in the region of the oxygen related modes, as expected.
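As an illustration of how such a fit constrains $`E_b`$ and $`\omega _{ph}`$, the following sketch fits Emin's line shape to synthetic data. The arrays, noise level and starting values are placeholders of ours, not the measured spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def small_polaron_alpha(omega, amp, e_b, omega_ph):
    """Emin's small-polaron absorption line shape; all energies in eV."""
    return amp / omega * np.exp(-(2.0 * e_b - omega) ** 2
                                / (4.0 * e_b * omega_ph))

# Placeholder "spectrum": the true answer is E_b = 0.35 eV and
# omega_ph = 0.03 eV (~240 cm^-1), plus a little noise.
rng = np.random.default_rng(0)
omega = np.linspace(0.3, 1.1, 60)
data = (small_polaron_alpha(omega, 1.0, 0.35, 0.03)
        + 0.01 * rng.normal(size=omega.size))

popt, _ = curve_fit(small_polaron_alpha, omega, data, p0=(1.0, 0.3, 0.05))
print("E_b = %.3f eV, omega_ph = %.3f eV" % (popt[1], popt[2]))
```

Note that the peak of this line shape sits slightly below $`2E_b`$, consistent with the $`\approx 0.62`$ eV PI peak and the fitted $`E_b=350`$ meV quoted above.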
Except for the phonon bleaching we do not observe any photoinduced local modes (PILM) in our spectra. The reason might be that the polaron phonon frequencies (indicated by Table I) fall below the frequency range where we can reliably measure photoinduced absorption features. Finally, it is instructive to compare the PI absorption spectrum with the IR conductivity spectra of chemically doped compounds, to see whether the PI carriers show any similarity to the carriers introduced by chemical means. Indeed, Okimoto et al. observe in $`x=0.1`$ La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> a broad absorption peaked around 0.5 eV which is absent at room temperature and increases in intensity with decreasing temperature. Consistent with our present interpretation, they attribute it to the localization of JT polarons at low temperatures and relate the peak energy of 0.5 eV to a JT polaron binding energy $`E_{pJT}`$. In addition, the reported absorption feature also shifts to lower energy as the doping is increased, in agreement with our photoinduced measurements. A midgap state with a similar peak energy and a similar doping dependence was also observed by Jung et al. in La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub>. On the other hand, Machida et al. and Quijada et al. report temperature dependent features in the optical spectra of manganites with peak energy in the $`1`$–$`1.5`$ eV range, depending on the cation composition on the rare earth (R) site. The temperature dependence of the intensity and position of these peaks is consistent with small JT polaron disappearance below T<sub>C</sub>. The peak energies in these experiments are much higher than $`2E_b`$ obtained from transport, Raman scattering and extrapolation of our data. Therefore they cannot be directly related to polaron hopping from a Mn<sup>3+</sup> to a Mn<sup>4+</sup> ion at energy $`2E_b`$, as suggested in ref., but rather to charge transfer between $`e_g`$ orbitals on neighboring Mn<sup>3+</sup> ions at energy $`2E_b+U`$. Here $`U`$ is the $`e_g`$-$`e_g`$ on-site Coulomb repulsion. This is additionally supported by the existence of two midgap peaks in La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub>, one in the range $`0.2`$–$`0.8`$ eV and another in the range $`1.1`$–$`1.6`$ eV, depending on $`x`$, attributed to photon-assisted polaron hopping from Mn<sup>3+</sup> to an unoccupied neighboring Mn<sup>4+</sup> and to charge transfer from Mn<sup>3+</sup> to an occupied neighboring Mn<sup>3+</sup>, respectively.

## IV Conclusions

A photoinduced midinfrared absorption peak is observed in weakly hole doped (La<sub>1-x</sub>Sr<sub>x</sub>Mn)<sub>1-δ</sub>O<sub>3</sub> and is attributed to absorption due to photon-assisted hopping of anti-JT polarons. The theoretical model for small-polaron absorption fits the experimental data well, indicating that the anti-JT polarons are small, with polaron binding energies of 200-350 meV, decreasing with increasing hole doping. The polaron-phonon frequencies are suggested to be in the 200-300 cm<sup>-1</sup> frequency range. We conclude by noting that the values of the binding energies inferred from our data are in good agreement with previous transport and optical measurements, exhibiting a similar trend of decreasing with increasing hole doping.

## V Acknowledgments

We would like to thank V.V. Kabanov for fruitful discussions.

## VI Figure Captions

Figure 1. The IR transmittance of the four samples with different Sr doping and cation deficiency. The shapes of the spectra beyond $`3000`$ cm<sup>-1</sup> are influenced by light scattering in the pellet.
The inset shows the real part of the AC susceptibility as a function of temperature for the four samples. The labeling is the same as in the main panel. Figure 2. Low temperature ($`T=25`$ K) a) PI transmittance and b) TD transmittance spectra as a function of doping. The spectra are vertically shifted for clarity and the thin lines represent the zero for each spectrum. Thick lines represent the small-polaron absorption fit to the data. Figure 3. The integrated PI absorption intensity in the $`x=0`$ AFM sample as a function of a) the laser fluence $`\mathrm{\Phi }`$ and b) temperature $`T`$. The solid line in panel a) is a $`\sqrt{\mathrm{\Phi }}`$ fit to the data and the dashed line in panel b) is a guide to the eye.
# Pre-resonant Charmonium - Nucleon Cross Section in the Model of the Stochastic Vacuum

## 1 Introduction

Charmonium-nucleon cross sections are of crucial importance in the context of Quark Gluon Plasma (QGP) physics. One needs to know the cross section $`\sigma _{c\overline{c}N}`$ in order to explain the nuclear suppression of $`J/\mathrm{\Psi }`$ in terms of ordinary absorption by nucleons, without assuming a so-called “deconfining regime”. Estimates using perturbative QCD give values which are too small to explain the observed absorption conventionally, but they are certainly not reliable for that genuinely nonperturbative problem. A nonperturbative estimate may be attempted by applying vector dominance to $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{}`$ photoproduction. In this way a cross section of $`\sigma _{J/\psi }\approx 1.3`$ mb for $`\sqrt{s}\approx 10`$ GeV and $`\sigma _\psi ^{}/\sigma _{J/\psi }\approx 0.8`$ has been obtained. A more refined multichannel analysis leads to $`\sigma _{J/\psi }\approx 3`$–$`4`$ mb. The fact that the absorption cross section seems to be nearly the same both for $`J/\psi `$ and $`\psi ^{}`$ has been interpreted as meaning that what is really absorbed is a pre-resonant $`c\overline{c}`$ state rather than the physical particles. The size of this state has been estimated to be $$r_8=\frac{1}{\sqrt{2m_c\mathrm{\Lambda }_{QCD}}}=0.2\text{–}0.25\text{ fm}$$ (1) and its cross section was then calculated with short distance QCD. A value of $`\sigma _8\approx 5.6`$–$`6.7`$ mb was found. In this note we calculate the pre-resonant $`c\overline{c}`$-nucleon cross sections in the model of the stochastic vacuum (MSV). It has been applied to a large number of hadronic and photoproduction processes with remarkably good success.

## 2 The Model of the Stochastic Vacuum

The basis of the MSV is the calculation of the scattering amplitude of two colourless dipoles, based on a semiclassical treatment developed by Nachtmann. The dipole-dipole scattering amplitude is expressed as the expectation value of two Wegner-Wilson loops with lightlike sides and transversal extensions $`\vec{r}_{t1}`$ and $`\vec{r}_{t2}`$, respectively. This leads to a profile function $`J(\vec{b},\vec{r}_{t1},\vec{r}_{t2})`$ from which hadron-hadron scattering amplitudes are obtained by integrating over different dipole sizes, with the transversal densities of the hadrons as weight functions, according to $$\sigma _{(c\overline{c})N}^{tot}=\int d^2b\,d^2r_{t1}\,d^2r_{t2}\,\rho _{(c\overline{c})N}(\vec{r}_{t1})\,\rho _N(\vec{r}_{t2})\,J(\vec{b},\vec{r}_{t1},\vec{r}_{t2})$$ (2) Here $`\rho _{(c\overline{c})N}(\vec{r}_{t1})`$ and $`\rho _N(\vec{r}_{t2})`$ are the transverse densities of the pre-resonant charmonium state and the nucleon, respectively. The basic ingredient of the model is the gauge invariant correlator of two gluon field strength tensors. The latter is characterized by two constants: its value at zero distance, the gluon condensate $`<g^2FF>`$, and the correlation length $`a`$. We take these values from previous applications of the model (and the literature quoted there): $`<g^2FF>=2.49\text{ GeV}^4`$ and $`a=0.346\text{ fm}`$. The wave functions of the proton have been determined from proton-proton and proton-antiproton scattering, respectively.
It turns out that the best description of the nucleon transverse density is given by that of a quark-diquark system with transversal distance $`\vec{r}_t`$ and density: $$\rho _N(\vec{r}_t)=|\mathrm{\Psi }_p(\vec{r}_t)|^2=\frac{1}{2\pi }\frac{1}{S_p^2}e^{\frac{|\vec{r}_t|^2}{2S_p^2}}.$$ (3) The value of the extension parameter, $`S_p=0.739\text{ fm}`$, obtained from proton-proton scattering agrees very well with that obtained from the electromagnetic form factor in a similar treatment. We start by estimating the cross section in the case where the $`c\overline{c}`$ pair is already in the physical $`J/\psi `$ or $`\psi ^{}`$ states. The physical wave functions can be obtained in two different approaches: 1) A numerical solution of the Schroedinger equation with the standard Cornell potential: $$V=\frac{4}{3}\frac{\alpha _s}{r}+\sigma r$$ (4) 2) A Gaussian wave function determined by the electromagnetic decay width of the $`J/\mathrm{\Psi }`$, which has been used in a previous investigation of $`J/\mathrm{\Psi }`$ photoproduction. The linear potential can be calculated in the model of the stochastic vacuum, which yields the string tension: $$\sigma =\frac{8\kappa }{81\pi }<g^2FF>a^2=0.179\text{ GeV}^2$$ (5) where the parameter $`\kappa `$ has been determined in lattice calculations to be $`\kappa =0.8`$. The other parameters, the charmed (constituent) mass and the (frozen) strong coupling, can be adjusted in order to give the correct $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{}`$ mass difference and the $`J/\psi `$ decay width: $$m_c=1.7\text{ GeV},\alpha _s=0.39$$ (6) We also use the standard Cornell model parameters: $$\alpha _s=0.39,\sigma =0.183\text{ GeV}^2,m_c=1.84\text{ GeV}$$ (7) From the numerical solution $`\psi (|\vec{r}|)`$ of the Schroedinger equation the transversal density is projected: $$\rho _{J/\mathrm{\Psi }}(\vec{r}_t)=\int \left|\psi (\sqrt{\vec{r}_t^{\,2}+r_3^2})\right|^2dr_3$$ (8) where $`\vec{r}_t`$ is the $`J/\mathrm{\Psi }`$ transversal radius. Given the values of $`\alpha _s`$, $`\sigma `$ and $`m_c`$ we solve the non-relativistic Schroedinger equation numerically, obtain the wave function, compute the transverse wave function and plug it into the MSV calculation. In the pre-resonance absorption model, the pre-resonant charmonium state is interpreted either as a color octet, $`(c\overline{c})_8`$, and a gluon in the hybrid $`(c\overline{c})_8g`$ state, or as a coherent $`J/\mathrm{\Psi }\mathrm{\Psi }^{}`$ mixture. For the pre-resonant state we use a Gaussian transverse wave function, as in Eq. (3), to represent a state with transversal radius $`\sqrt{<r_t^2>}=\sqrt{2}S_\psi `$. $`S_\psi `$ is the pre-resonance extension parameter analogous to $`S_p`$. The relation between the average transverse radius and the average radius is given by: $$\sqrt{<r_t^2>}\approx 0.82\sqrt{<r^2>}$$ (9) With the knowledge of the wave functions and the transformation properties of the constituents we can compute the total cross section given by the MSV. The resulting nucleon - pre-resonant charmonium cross section will be different depending on whether the pre-resonant charmonium state consists of entities in the adjoint representation (as $`(c\overline{c})_8g`$) or in the fundamental representation (as a $`J/\mathrm{\Psi }\mathrm{\Psi }^{}`$ mixture), the relation being: $$\sigma _{\mathrm{adjoint}}=\frac{2N_C^2}{N_C^2-1}\sigma _{\mathrm{fundamental}}$$ (10) with $`N_C=3`$.
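A rough sketch of step 1) is given below: a radial shooting solution of the S-wave Schroedinger equation with the Cornell potential, in units $`\mathrm{}=c=1`$. The simple Euler stepping, the grid, and the energy scan are our own illustrative choices; they locate the 1S level only approximately.

```python
import numpy as np

# Cornell-potential 1S level by shooting, hbar = c = 1 (energies in GeV,
# lengths in GeV^-1).  Parameters as in Eq. (6); MU is the reduced mass.
ALPHA_S, SIGMA, M_C = 0.39, 0.179, 1.7
MU = M_C / 2.0

def V(r):
    return -4.0 / 3.0 * ALPHA_S / r + SIGMA * r

def u_at_rmax(E, dr=5e-3, r_max=25.0):
    """Integrate u'' = 2*MU*(V - E)*u outward with u(0)=0, u'(0)=1."""
    u, up = 0.0, 1.0
    for r in np.arange(dr, r_max, dr):
        up += 2.0 * MU * (V(r) - E) * u * dr
        u += up * dr
    return u

# The lowest eigenvalue shows up as the first sign change of u(r_max; E):
energies = np.linspace(-1.0, 1.0, 201)
tails = [u_at_rmax(E) for E in energies]
for e1, e2, t1, t2 in zip(energies, energies[1:], tails, tails[1:]):
    if t1 * t2 < 0.0:
        print("1S eigenvalue near E = %.2f GeV" % (0.5 * (e1 + e2)))
        break
```

The 2S level is the next sign change of the same scan; from the resulting $`\psi (r)`$ the transversal density of Eq. (8) follows by a one-dimensional numerical integration over $`r_3`$.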
## 3 Results

The results are shown in Table I. In this table $`\sqrt{<r^2>}`$ is the root of the mean square distance between quark and antiquark. Wave function A) is the one obtained with the parameters given by Eqs. (5) and (6). Wave function B) corresponds to the standard Cornell model parameters, Eq. (7). Wave function C) gives the result for the $`J/\mathrm{\Psi }N`$ cross section obtained with the weighted average of the longitudinally and transversely polarized $`J/\mathrm{\Psi }`$ wave functions of Ref., with transversal sizes $`\sqrt{<r_t^2>}=0.327\text{ fm}`$ and 0.466 fm.

| Wave function | $`\sqrt{<r^2>}`$ fm | $`\sigma _{tot}`$ \[mb\] |
| --- | --- | --- |
| $`J/\mathrm{\Psi }(1S)`$ | | |
| A | 0.393 | 4.48 |
| B | 0.375 | 4.06 |
| C | | 4.69 |
| $`\mathrm{\Psi }(2S)`$ | | |
| A | 0.788 | 17.9 |

TABLE I: $`J/\mathrm{\Psi }N`$ and $`\mathrm{\Psi }^{}N`$ cross sections

Averaging over our results for the different wave functions, our final result for the $`J/\mathrm{\Psi }N`$ cross section is $$\sigma _{J/\psi N}=4.4\pm 0.6\text{ mb}$$ (11) The error is an estimate of the uncertainties coming from the wave function and the model. Other nonperturbative calculations of the $`J/\mathrm{\Psi }N`$ cross section were presented in ref., where the value $`\sigma _{J/\psi N}=3.6`$ mb was found, and in ref., which reported $`\sigma _{J/\psi N}=2.8`$ mb. Our result is somewhat larger but still in agreement with these numbers. For $`\mathrm{\Psi }^{}`$ our cross section is slightly smaller than $`\sigma _{\psi ^{}N}=20.0`$ mb, obtained in , but larger than $`\sigma _{\psi ^{}N}=10.5`$ mb, as found in . In Table II we show the results for the absorption cross section of the pre-resonant charmonium state, interpreted as the color octet $`(c\overline{c})_8g`$ and as the coherent $`J/\mathrm{\Psi }\mathrm{\Psi }^{}`$ mixture, for different values of the average squared radius.

| $`\sqrt{<r^2>}`$ | $`\sigma _{c\overline{c}}`$ | $`\sigma _{(c\overline{c})_8g}`$ |
| --- | --- | --- |
| (fm) | (mb) | (mb) |
| 0.24 | 1.79 | 4.02 |
| 0.31 | 2.76 | 6.21 |
| 0.37 | 3.96 | 8.91 |
| 0.43 | 5.30 | 11.92 |
| 0.49 | 6.81 | 15.32 |
| 0.55 | 8.50 | 19.12 |
| 0.61 | 10.28 | 23.13 |

TABLE II: The pre-resonant charmonium-nucleon cross section

From our results we can see that a cross section $`\sigma _\psi ^{abs}\approx 6`$–$`7`$ mb, needed to explain the $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{}`$ suppression in p-A collisions in the pre-resonance absorption model, is consistent with a pre-resonant charmonium state of size $`0.50`$–$`0.55`$ fm if it is a $`J/\mathrm{\Psi }\mathrm{\Psi }^{}`$ mixture, or $`0.30`$–$`0.35`$ fm for a $`(c\overline{c})_8g`$ state. This last estimate can be compared with $`r_8`$ and $`\sigma _8`$ quoted above. For $`\sqrt{<r^2>}=r_8\approx 0.25`$ fm we obtain a cross section of $`\approx 4`$ mb instead of $`6.7`$ mb, as obtained in . In spite of the uncertainty in these numbers we can see that our calculation leads to smaller values for the cross sections. Alternatively, we may reverse the argument and say that the pre-resonant octet state must have a larger radius than previously estimated. This seems to be unlikely, especially in view of the estimates of sizes and lifetimes performed in . This conclusion will become even stronger with the inclusion of medium effects. In a hadronic medium, the QCD vacuum parameters may change. Indeed, lattice calculations show that both the correlation length and the quark and gluon condensates tend to decrease in a dense (or hot) medium. The first consequence, largely explored in cross section calculations, is the change of hadron masses.
The second consequence is a reduction of the string tension, $`\sigma `$, which leads to two competing effects that can be quantitatively compared in the MSV. On one hand, the cross section tends to decrease strongly when the gluon condensate or the correlation length decrease. On the other hand, when the string tension is reduced the $`c\overline{c}`$ state becomes less confined and will have a larger radius, which, in turn, would lead to a larger cross section for interactions with the nucleons in the medium. In the MSV we can determine which of these effects is dominant. Although all the calculations are done numerically, we can parametrize the dependence of the cross sections on some specific quantities. We have therefore the following three possibilities to express the cross section as a function of the string tension, $`\sigma `$, the correlation length, $`a`$, and the gluon condensate, $`<g^2FF>`$: $$\sigma _{\psi N}\propto \{\begin{array}{c}\sigma ^{5/6}a^{5/2}\\ \sigma ^{25/12}<g^2FF>^{-5/4}\\ <g^2FF>^{5/6}a^{25/6}\end{array}$$ (12) From the equations above we see that the net effect of the medium is a reduction of the cross section. Using the values of the correlation length and the gluon condensate reduced by 10%: $`a=0.31`$ fm, $`g^2FF=2.25`$ GeV<sup>4</sup>, we obtain a 40% reduction in the cross sections. Taking this reduction into account, the absorption cross sections obtained both for the physical $`J/\psi `$ (Eq. (11)) and for the $`J/\psi \psi ^{}`$ mixture (second column in Table II) are smaller than the ones needed in Refs. , or to explain the experimental data. The absorption cross section of the hybrid $`(c\overline{c})_8g`$ state, even after the inclusion of medium effects, is still compatible (although somewhat small) with the values quoted in the mentioned papers. To summarize, we calculated the nonperturbative $`J/\mathrm{\Psi }N`$ and $`\mathrm{\Psi }^{}N`$ cross sections with the MSV. We obtain $`\sigma _{J/\psi N}\approx 4`$ mb and $`\sigma _{\psi ^{}N}\approx 18`$ mb. An interesting prediction of the MSV is the strong dependence on the parameters of the QCD vacuum, which will most likely lead to a drastic reduction of these cross sections at higher temperatures and perhaps also at higher densities.
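As a quick arithmetic check of the 40% figure quoted above, note that the third line of Eq. (12), with both $`a`$ and $`<g^2FF>`$ reduced by 10%, gives

$$\frac{\sigma _{\psi N}^{med}}{\sigma _{\psi N}}=(0.9)^{5/6}(0.9)^{25/6}=(0.9)^5\approx 0.59,$$

i.e. a reduction of about 40%, consistent with the number quoted in the text.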
# On the search for a narrow penta-quark Z+ baryon in NN interactions

Supported by Forschungszentrum Jülich and the Australian Research Council

## Abstract

The possibility for an observation of a narrow penta-quark $`Z^+`$ baryon in $`NN`$ reactions is discussed. It is shown that the $`pp\to n\mathrm{\Sigma }^+K^+`$ reaction at excess energies around 100 MeV above threshold provides optimal conditions for $`Z^+`$ baryon detection through an analysis of the $`nK^+`$ invariant mass spectrum, if the $`Z^+`$ mass is located around 1.5 GeV with a rather narrow width.

The standard valence quark model builds up the low-lying baryons from three valence quarks $`qqq`$ with strangeness from $`S=-3`$ to $`S=0`$, which are surrounded by a meson cloud or strong $`q\overline{q}`$ vacuum polarizations. However, Fock states involving additional $`q\overline{q}`$ components are also allowed, and these can appear as baryonic resonances. The inverse lifetime of these excited states is proportional to the phase space for the decays allowed by the quantum numbers. Interesting excitations are possible for a $`qqqq\overline{q}`$ configuration, which allows one to construct $`S=+1`$ states, denoted $`Z`$ baryons. The observation of $`Z`$ baryons thus provides an unambiguous signal that the standard valence quark model has to be extended to a larger Fock space. An experimental search for $`Z`$ baryons was started in 1966 at the BNL by the observation of a clear resonance peak in the $`K^+p`$ and $`K^+d`$ total cross sections at kaon momenta around 0.9–1.3 GeV/c. This novel baryonic resonance with strangeness $`S=+1`$, mass $`M_Z\approx `$ 1.91 GeV and width $`\mathrm{\Gamma }_Z\approx `$ 180 MeV was interpreted as an SU(3) antidecuplet together with the $`N_{11}^{*}`$(1480). A review of the further experimental and theoretical activities and the evidence for strangeness $`S=+1`$ baryon resonances is given by the PDG in Ref. . Summarizing 20 years of experimental activity on $`S=+1`$ baryons, it is important to note that almost all searches were performed with $`KN`$ elastic and inelastic scattering at kaon momenta corresponding to $`Z`$ baryons in the mass range $`1.74\le M_Z\le 2.16`$ GeV. Furthermore, the resonance total widths reported in Ref. are very large and range from 70 MeV to 845 MeV. Furthermore, the expectation of heavy $`Z`$ resonances is supported by the MIT bag model, where the lightest $`Z^+`$ baryon has a mass of 1.7 GeV and a smaller width. On the other hand, in the large $`N_c`$ limit of QCD baryons emerge as soliton configurations that have to be projected onto the proper quantum numbers. In this framework the exotic pentaquark $`Z^+`$ is the lightest member of the antidecuplet of baryons and arises naturally as a rotational excitation of the classical soliton (cf. Refs. ). For the most recent analysis of the $`Z^+`$ baryon properties we refer the reader to Refs. . The theoretical predictions for the $`Z^+`$ mass (and width) in the soliton models vary in a wide range. Whereas the calculations in the Skyrme model of Ref. predict a $`Z^+`$ mass of 1.7 GeV, the analysis within the framework of the chiral quark-soliton model suggests a $`Z^+`$ mass around 1.5 GeV and a quite narrow width $`\mathrm{\Gamma }_Z\lesssim `$15 MeV, due to specific contributions of the soliton rotation (see for details Ref. ). The detailed analysis of the $`Z^+`$ width in the chiral quark soliton model indeed shows that the most favorable width should be about 5 MeV. In this letter we will explore in particular whether such a state might be detected in $`NN`$ scattering.
Furthermore, we point out that Ref. predicts $`M_Z`$=1.58 GeV and a width $`\mathrm{\Gamma }_Z`$=100$`\pm `$30 MeV. Thus the most recent calculations suggest a low mass of the $`Z^+`$ baryon, in a range that has not been investigated experimentally before. Here we study the possibility of $`Z^+`$ ($`I=0`$, $`S=+1`$, $`J^P=\frac{1}{2}^+`$) observation in $`NN`$ collisions, which can be performed at the COoler SYnchrotron (COSY) in Jülich. Furthermore, we explore the effects due to a narrow $`Z^+`$ resonance in the invariant mass spectra. Since the $`Z^+`$ resonance couples only to the $`NK`$ system, it can be excited in real or virtual $`K^+n`$ or $`K^0p`$ scattering. Unfortunately, there are no data available on real $`K^+n`$ scattering for momenta $`p_K<600`$ MeV/c, which correspond to the mass range $`M_Z\lesssim 1.58`$ GeV of interest here. On the other hand, data exist on the $`K_Lp\to K_Sp`$ reaction, which can be analyzed in order to evaluate the $`Z^+`$ baryon properties. A virtual $`Z^+`$ excitation might be tested in the $`pp\to n\mathrm{\Sigma }^+K^+`$, $`pp\to p\mathrm{\Sigma }^+K^0`$, $`pn\to n\mathrm{\Lambda }K^+`$ and $`pn\to p\mathrm{\Lambda }K^0`$ reactions through $`K`$-meson exchange. Furthermore, reactions with a neutron in the initial state are basically performed with a deuteron target, which might not be suitable for a measurement of the $`Z^+`$ signal in the $`NK`$ invariant mass; if the $`Z^+`$ is a narrow resonance, then averaging over the deuteron spectral function might substantially distort the $`Z^+`$ signal. In principle this problem can be resolved by an additional measurement of the spectator nucleon of the deuteron and a full kinematical reconstruction of the final states. However, here we suggest using the $`K^+`$ production channel, and also provide a motivation for the advantages of the $`pp\to n\mathrm{\Sigma }^+K^+`$ reaction in searching for the $`Z^+`$ baryon. The major uncertainty in calculations of the contribution from $`K`$-meson exchange to the $`pp\to n\mathrm{\Sigma }^+K^+`$ reaction is due to the poor knowledge of the $`N\mathrm{\Sigma }K`$ coupling constants. The analysis of the available data only provides $`3.5<g_{N\mathrm{\Sigma }K}<6.4`$, where the upper limit stems from a dispersion analysis, which might be considered an almost model independent evaluation of the coupling constant from data. The SU(3) limit predicts $`3.2<g_{N\mathrm{\Sigma }K}<4.6`$ which, within the given uncertainty, is in reasonable agreement with the $`N\mathrm{\Sigma }K`$ coupling extracted from the experimental data in Ref. . In the following calculations we will use $`g_{N\mathrm{\Sigma }K}`$=3.86, as given by SU(3) with a mixing determined from the semileptonic hyperon decay. Now, if the $`Z^+`$ baryon is a narrow state, its contribution to the total $`pp\to n\mathrm{\Sigma }^+K^+`$ cross section is proportional to the overlap between the $`Z^+`$ spectral function (taken in Breit-Wigner form) and the available phase space. The reaction phase space $`R_3`$ increases with invariant collision energy $`\sqrt{s}`$ as $`R_3\propto ϵ^2`$, where $`ϵ=\sqrt{s}-m_N-m_\mathrm{\Sigma }-m_K`$. However, the $`Z^+`$ contribution saturates at energies slightly above $`ϵ=M_Z-m_N-m_K+3\mathrm{\Gamma }_Z`$ (taking three standard deviations), where $`M_Z`$ and $`\mathrm{\Gamma }_Z`$ are the mass and the width of the $`Z^+`$ resonance, respectively.
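For orientation, inserting $`M_Z=1.5`$ GeV and the physical masses $`m_n\approx 0.940`$ GeV, $`m_{K^+}\approx 0.494`$ GeV into this expression gives a pole position at

$$ϵ_{pole}=M_Z-m_N-m_K\approx 1500-940-494=66\text{ MeV},$$

which is the value used below; the $`Z^+`$ contribution saturates about $`3\mathrm{\Gamma }_Z`$ (15 MeV for $`\mathrm{\Gamma }_Z=5`$ MeV) above this point.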
Thus for $`M_Z`$=1.5 GeV the threshold for $`Z^+`$ production at its pole is $`ϵ\approx `$66 MeV, and the optimal ratio of the $`Z^+`$ contribution to the total production cross section – due to other processes – is obtained at energies not far from threshold, simply due to phase space arguments. Obviously, a difficulty in the observation of $`Z^+`$ production in the $`NN\to NYK`$ reaction is the dominance of other processes, since the contribution from the $`Z^+`$ occurs only in a small part of the available final phase space, which lies around invariant masses of the $`NK`$ system close to the $`Z^+`$ resonance mass. Thus it is important that the range of $`NK`$ invariant masses dominated by $`Z^+`$ production is not affected by the intermediate $`YK`$ resonances and the $`NY`$ final state interaction (FSI), because these can substantially modify the final observables relative to the pure isotropic phase-space distributions, as is known both experimentally and theoretically. The dominant channel in the $`pp\to n\mathrm{\Sigma }^+K^+`$ reaction is given by the intermediate $`\mathrm{\Delta }^{++}(1920)`$ resonance. Our further motivations can be easily illustrated by Fig. 1, which shows the Dalitz plot for the $`pp\to n\mathrm{\Sigma }^+K^+`$ reaction at an excess energy $`ϵ`$=100 MeV, calculated according to Ref. and additionally taking into account the FSI as well as the $`Z^+`$ contribution. The Dalitz plot is shown as a function of the $`\mathrm{\Sigma }^+K^+`$ and $`n\mathrm{\Sigma }`$ invariant masses. Furthermore, the $`\mathrm{\Sigma }^+K^+`$ invariant mass distribution ranges from $`m_\mathrm{\Sigma }+m_K`$ up to $`m_\mathrm{\Sigma }+m_K+ϵ`$, and is enhanced at large masses due to the $`\mathrm{\Delta }^{++}(1920)`$ resonance. Notice that large $`\mathrm{\Sigma }^+K^+`$ invariant masses correspond to small $`n\mathrm{\Sigma }^+`$ invariant masses, which are enhanced due to the FSI. The size of the squares in Fig. 1 is proportional to the production cross section. The solid line in Fig. 1 shows the trace of the $`Z^+`$ baryon pole with $`M_Z`$=1.5 GeV, while the distribution along this line indicates the calculated contribution from the $`Z^+\to nK^+`$ resonance, assuming a width $`\mathrm{\Gamma }`$=5 MeV. Note that the $`Z^+`$ contribution to the $`nK^+`$ invariant mass spectrum at $`M_{NK}`$=$`M_Z`$ is obtained by integrating along the solid line in Fig. 1, and in principle can be well separated from the region dominated by the $`\mathrm{\Delta }^{++}(1920)`$ resonance and the FSI by applying appropriate cuts. The contribution from the $`Z^+`$ baryon can then be detected as an enhancement in the Dalitz plot along the expected $`Z^+`$ resonance pole. Without performing any cuts, we show the invariant mass spectra of the $`nK^+`$ system produced in the $`pp\to n\mathrm{\Sigma }^+K^+`$ reaction at $`ϵ`$=100 MeV and 200 MeV in Fig. 2. It is important to note that the total $`pp\to n\mathrm{\Sigma }^+K^+`$ cross sections calculated at $`ϵ`$=100 MeV and 200 MeV amount to 2 $`\mu `$b and 12.7 $`\mu `$b, respectively, while the contribution from the $`Z^+`$ is around 80 nb and 120 nb, respectively. As discussed above, the ratio of the $`Z^+`$ contribution to the total $`pp\to n\mathrm{\Sigma }^+K^+`$ cross section substantially decreases with increasing beam energy due to phase space.
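The kinematic windows underlying Fig. 1 are easy to reproduce. The short sketch below (rounded PDG masses, our own numerical choices) shows where a narrow $`M_Z=1.5`$ GeV state sits inside the $`nK^+`$ invariant mass range at the two excess energies considered.

```python
# Invariant-mass windows for p p -> n Sigma+ K+ at excess energy eps (GeV).
M_N, M_SIG, M_K = 0.9396, 1.1894, 0.4937   # rounded PDG masses, GeV

def windows(eps):
    rs = M_N + M_SIG + M_K + eps           # total invariant energy sqrt(s)
    return {"M(nK)": (M_N + M_K, rs - M_SIG),
            "M(SigmaK)": (M_SIG + M_K, rs - M_N),
            "M(nSigma)": (M_N + M_SIG, rs - M_K)}

for eps in (0.100, 0.200):
    print(eps, {k: (round(a, 3), round(b, 3))
                for k, (a, b) in windows(eps).items()})
# At eps = 0.1 GeV the M(nK) window spans ~1.433-1.533 GeV, so a narrow
# state at M_Z = 1.5 GeV lies inside it; the cuts M(SigmaK) < 1.76 GeV and
# M(nSigma) > 2.15 GeV used below remove the Delta++(1920)- and
# FSI-dominated corners of this phase space.
```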
The hatched histograms in Fig. 2 show the $`Z^+`$ contribution, the dashed histograms indicate the contribution from other processes, which in the following we discuss as background to $`Z^+`$ baryon production. The solid histograms show the total $`nK^+`$ invariant mass spectra. The contribution from the $`Z^+`$ baryon is well visible at $`ϵ`$=100 MeV, while it becomes almost invisible at an excess energy of 200 MeV. Although the $`Z^+`$ contribution to the total $`pp\to n\mathrm{\Sigma }^+K^+`$ cross section at $`ϵ`$=100 MeV is very small, it can be detected in the $`nK^+`$ invariant mass spectrum in case it has a narrow width. It is important to note again that lower masses of the $`Z^+`$ baryon can be detected more easily, since the background is reduced in this case. Opposite considerations hold for higher $`Z^+`$ masses, due to a larger width and background, respectively. In order to improve the signal from the $`Z^+`$ baryon one can cut away the regions dominated by the intermediate $`\mathrm{\Delta }^{++}(1920)`$ resonance and the FSI. In this respect we show in Fig. 3 the $`nK^+`$ invariant mass spectra at $`ϵ`$=100 MeV and 200 MeV for the cuts $`M_{\mathrm{\Sigma }K}<`$1.76 GeV and $`M_{N\mathrm{\Sigma }}>`$2.15 GeV. Now the $`Z^+`$ contribution becomes more pronounced, even at an excess energy of 200 MeV. In summary, the present study indicates the possibility of observing the penta-quark $`Z^+`$ baryon in the $`nK^+`$ invariant mass spectra of the $`pp\to n\mathrm{\Sigma }^+K^+`$ reaction at excess energies around 100 MeV. Our estimate suggests that with an experimental statistics of about 200 events the $`Z^+`$ signal might be detected at $`ϵ\approx `$100 MeV. We note that our estimate can be considered conservative, because a small $`N\mathrm{\Sigma }K`$ coupling constant, $`g_{NK\mathrm{\Sigma }}=3.86`$, is used in the calculations. On the other hand, we admit that our estimate is based on a narrow $`Z^+`$ baryon width. If the $`Z^+`$ resonance has a wider width or a higher mass, the data analysis of the $`Z^+`$ signal will become more uncertain. The authors like to thank W. Eyrich for his interest and encouragement and acknowledge the discussions on the $`Z^+`$ properties with D. Diakonov and V. Petrov.
# Thermodynamic chaos and infinitely many critical exponents in the Baxter-Wu model

## 1 Introduction

In contrast to microscopic/classical chaos, the physics behind thermodynamic chaos is far from being well understood. Defined as the macroscopic state of a system with chaotically broken translational symmetry, this phenomenon is one of the main characteristics of glassy systems. Traditionally these systems are investigated by introducing randomness in the microscopic parameters. Although in many cases this can be physically justified, it leaves the origin of the randomness questionable, especially when the microscopic parameters are in turn measured from the macroscopic response of the system. Recent progress in low temperature physics shows that frustration and competing interactions are more important for thermodynamic chaos than the randomness itself. Hence the interest in deterministic models of Statistical Mechanics (SM), where frustration and competing interactions are not the result of randomness. The antiferromagnetic Ising model on triangular lattices is a classical example of a model incorporating both frustration and competing interactions. In 2d, the exact solution in the absence of a magnetic field reveals that this model is not capable of exhibiting thermodynamic chaos. Recently, we investigated the antiferromagnetic Ising model in a magnetic field on the Husimi tree, which, being an approximation to regular lattices, allows one to preserve the frustrative nature of the antiferromagnetic interaction. Although this model has an interesting phase structure, no thermodynamic chaos has been found. Other approximations preserving frustration also show no sign of thermodynamic chaos in triangular antiferromagnets. The reason is the following: frustration on a triangle results in a macroscopic degeneracy of the ground state, including chaotic configurations, but due to the symmetry of the Hamiltonian these configurations are not observable at the macroscopic level. On the other hand, the magnetic field, which competes with the antiferromagnetic interaction, completely confines the frustrations. Thus a special symmetry breaking field, other than a uniform one, is necessary for observing thermodynamic chaos in triangular antiferromagnets. The Baxter-Wu model in a magnetic field is one of the models where frustration and competing interactions are present simultaneously. The thermodynamic chaos found in this model by Monroe has been investigated in some detail by the present authors. A universal transition to chaos is one of the interesting phenomena which has been found. The main difficulty one faces is that it is necessary to introduce infinitely many order parameters to describe the system in a chaotic phase. It is interesting to note that a similar problem also exists in disordered models of spin glasses, namely one needs to introduce Parisi's order parameter functional or local Edwards-Anderson order parameters in order to describe the system in a glassy phase. Fortunately, for deterministic models one can use the thermodynamic formalism of dynamical systems to infer the universal characteristics of chaotic phases. In particular, the distribution of the local Lyapunov exponents shows universal scaling behavior, similar to the one found for the logistic map. Although similar universalities have also been found in the distribution of the local Edwards-Anderson order parameters of disordered models, as we shall see, in many cases disordered models are not capable of modeling thermodynamic chaos.
In this paper we investigate the Baxter-Wu model in a complex magnetic field. At this point, it is interesting to draw some parallels between Quantum Mechanics/Quantum Field Theory (QFT) and SM. Investigation of the singularities of the scattering matrix in the complex energy/momentum plane reveals information about bound states and resonances which is otherwise difficult to obtain. But in contrast to the scattering matrix, the singularities of thermodynamic quantities are not necessarily isolated. In fact, as proven by Lee and Yang in 1952, the zeros of the partition function of the 2d Ising model in a complex magnetic field lie on a curve which approaches the real axis at the phase transition point, the zeros becoming dense in the thermodynamic limit. Soon after, Huang in his seminal textbook of SM raised the question of the possibility of having an SM model whose partition function zeros pinch the real axis over some range instead of at a single point. Recently, we investigated the Baxter-Wu model in the complex temperature plane. It has been shown that the Fisher zeros of this model densely fill a domain near the real axis. As we shall see, the Lee-Yang zeros of the Baxter-Wu model satisfy the criterion mentioned by Huang. This allows us to prove the existence of infinitely many critical exponents in this model. The paper is organized as follows. In Section 2, after introducing the Baxter-Wu model, we discuss the principal differences in modeling thermodynamic chaos by deterministic and disordered models. In Section 3 we present our numerical results for the phase structure of the Baxter-Wu model in a complex magnetic field and discuss the role of thermodynamic chaos on it. In Section 4 we present our conclusions.

## 2 The Baxter-Wu model

The Hamiltonian of the Baxter-Wu model in a magnetic field has the following form: $$\mathrm{H}=-J_3\sum _\mathrm{\Delta }\sigma _i\sigma _j\sigma _k-h\sum _i\sigma _i$$ (1) where $`\sigma _i\in \{-1;+1\}`$ are Ising variables, and $`J_3`$ and $`h`$ are the three-site interaction strength and the magnetic field, respectively. The first sum goes over all triangles and the second one over all sites. Like the two-site interacting Ising model, this model has multiple applications and is one of the few models of SM exactly solvable in 2d. The solution was first found by Baxter and Wu in 1973 using the Bethe Ansatz method. Later it was shown that this solution is a particular case of the more general solution of the eight-vertex model. Other interesting relations between the Baxter-Wu model and other exactly solvable SM models can be obtained using generalized star-triangle relations. Recently, the solution by the Bethe Ansatz method has been generalized to include more general boundary conditions, and an interesting conjecture in the framework of Conformal Field Theory has been proposed, viz. that the Baxter-Wu model and the 4-state Potts model share the same operator content. On a single triangle the ground state of the three-site interaction consists of the configuration where all spins are aligned in the same direction (up or down, depending on the sign of $`J_3`$) and of the configurations obtained from this one by reversing the spins at any two sites of the triangle. This makes the ground state of the Baxter-Wu model highly degenerate. In particular, an arbitrary alignment of the spins along an arbitrary direction can be achieved starting from the uniform configuration without altering the total energy of the system.
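The degeneracy argument can be verified by brute force on a single triangle; a minimal enumeration sketch (with $`J_3>0`$ chosen for definiteness):

```python
from itertools import product

J3 = 1.0   # three-site coupling; its sign selects which alignment wins

# Energy of a single triangle, H = -J3 * s_i * s_j * s_k (field h = 0)
energy = {s: -J3 * s[0] * s[1] * s[2] for s in product((-1, 1), repeat=3)}

e_min = min(energy.values())
ground = [s for s, e in energy.items() if e == e_min]
print(ground)
# -> [(-1, -1, 1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
# Four of the eight configurations are degenerate: the fully aligned one
# plus the three obtained from it by reversing the spins at any two sites.
```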
By encoding these configurations in binary sequences it becomes clear that the ground state of the Baxter-Wu model involves uniform, modulated, as well as chaotic configurations. Note that the third spin in the Hamiltonian of the Baxter-Wu model acts like a (pseudo) random two-site interaction strength. As we mentioned in the introduction, the macroscopic degeneracy itself is not sufficient for thermodynamic chaos. In addition to the macroscopic degeneracy, a symmetry breaking field is necessary, which will pick out a particular chaotic configuration of the ground state when averaging over all configurations. As we shall see, in the Baxter-Wu model the magnetic field with sign opposite to $`J_3`$ satisfies this criterion.

To that end, let us consider the Baxter-Wu model on the Husimi tree, so that the frustrative nature of the three-site interaction is preserved and, at the same time, analytical expressions for thermodynamic quantities can be obtained, even in the presence of a magnetic field. The magnetization at the central site of the Husimi tree as a function of temperature $`T`$, magnetic field $`h`$ and three-site coupling $`J_3`$ is given by

$$m_n(T,h,J_3)=\frac{\mu x_n^\gamma -1}{\mu x_n^\gamma +1}$$ (2)

where $`n`$ numbers the generation in the hierarchy of Husimi trees, $`\gamma `$ is equal to twice the coordination number, and $`x_n`$ is given by the following recurrence relation

$$x_n=f(x_{n-1}),\qquad f(x)=\frac{z\mu ^2x^{2(\gamma -1)}+2\mu x^{\gamma -1}+z}{\mu ^2x^{2(\gamma -1)}+2z\mu x^{\gamma -1}+1}$$ (3)

where $`z=e^{2J_3/kT}`$ and $`\mu =e^{2h/kT}`$. The initial condition for the recurrence relation (3) depends on the boundary conditions (e.g. $`x_0=1`$ corresponds to the free boundary condition). It is interesting to point out that the recurrence relation (3) also enters the formulas for the expectation values of quantum mechanical operators in some field theoretical models.

Depending on $`T`$, $`h`$ and $`J_3`$, the attractor of the map consists of a stable point, periodic cycles or strange attractors, so that our system is in a uniform (paramagnetic or ferromagnetic), modulated or glassy phase, respectively (Figure 1).
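A phase portrait like Figure 1 can be explored numerically by iterating the recursion (3) and inspecting the attractor. The Python sketch below is a minimal illustration; the parameter values ($`\gamma =4`$, the chosen $`T`$, $`h`$, $`J_3`$ and the transient length) are our assumptions, not the values used for the figure:

```python
import math

def f(x, z, mu, gamma):
    """One step of the Husimi-tree recursion, Eq. (3)."""
    a = mu ** 2 * x ** (2 * (gamma - 1))
    b = mu * x ** (gamma - 1)
    return (z * a + 2 * b + z) / (a + 2 * z * b + 1)

def attractor(T, h, J3, gamma=4, k=1.0, n_trans=5000, n_keep=200):
    z = math.exp(2 * J3 / (k * T))      # Boltzmann weights as in Eq. (3)
    mu = math.exp(2 * h / (k * T))
    x = 1.0                             # free boundary condition, x_0 = 1
    for _ in range(n_trans):            # discard the transient
        x = f(x, z, mu, gamma)
    orbit = []
    for _ in range(n_keep):             # sample the attractor
        x = f(x, z, mu, gamma)
        orbit.append(x)
    # magnetization at the central site, Eq. (2)
    m = [(mu * x ** gamma - 1) / (mu * x ** gamma + 1) for x in orbit]
    return orbit, m

orbit, m = attractor(T=0.5, h=-1.0, J3=1.0)
n_distinct = len({round(x, 10) for x in orbit})
# 1 distinct point: uniform phase; a short cycle: modulated phase;
# ~n_keep distinct points: chaotic (glassy) phase
print(n_distinct, "distinct attractor points")
```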
Note that, as far as the dynamics of the map (3) is concerned, there is no difference whether it describes the dynamics of microscopic quantities or the distribution of macroscopic ones. The only difference is in the interpretation of the results: in the former case it describes the time evolution of microscopic quantities, whereas in the latter case the site-to-site variation of macroscopic quantities. For instance, the map involved in the formulas for the magnetization of the ANNNI model on the Bethe lattice arises naturally in the microscopic dynamics of a neuron with a non-monotonic transfer function. Taking into account the discussion in the introduction, one can see that the large variety of phases typical in low temperature physics and biophysics is the result of collective effects of frustrations and competing interactions.

The main difficulty lies in the region of parameter space where the system exhibits thermodynamic chaos. In the uniform or modulated phases one can use conventional order parameters, such as the magnetization, but at every bifurcation point the system undergoes a continuous phase transition and their number doubles. Thus in a chaotic phase we end up with infinitely many order parameters. To find a possible solution to this problem one can use an invariant measure $`P(m)`$ of the map (3) to define e.g. $`q=mP(m)𝑑m`$ as an order parameter. But, as is well known, as in axiomatic QFT or in stochastic dynamical systems, in general even the first few moments are not sufficient for describing the system. Thus we have to use the thermodynamic formalism of dynamical systems for a complete description of the system. Let us recall that by using SM one expects to get rid of the large number of microscopic degrees of freedom and to avoid solving the complicated microscopic dynamics involving a thermostat. The thermodynamic formalism of dynamical systems, on the other hand, allows one to obtain the quantities which describe dynamical systems, e.g. the spectrum of Lyapunov exponents or generalized dimensions. By applying the thermodynamic formalism of dynamical systems we do not expect to get rid of the infinitely many order parameters. Instead, we obtain information about our system in terms of Lyapunov exponents or generalized dimensions. It is the universalities in the distributions of these quantities which allow one to avoid the measurement of infinitely many order parameters.

To compare deterministic and disordered models of thermodynamic chaos, let us consider the problem of modeling thermodynamic chaos on a computer. In order to distinguish a modulated phase with a large period from a chaotic one, we have to simulate the deterministic model on a very large lattice. On the other hand, we can simulate this system on a small lattice by a disordered model with an appropriately chosen measure for the random parameters. But since strange attractors of dynamical systems, with a few exceptions (e.g. hyperbolic dynamical systems), have infinitely many invariant measures, disordered models are not capable of modeling thermodynamic chaos at all temperatures and boundary conditions. It is interesting to note that thermodynamic or spatial chaos can be observed in deterministic systems by changing the boundary conditions only.

## 3 Lee-Yang singularities

An attractive feature of disordered models is that it is believed that only a small set of critical exponents is needed to describe the different phase transitions taking place in spin glasses. Complex temperature/field analysis is one of the many tools used to extract the critical exponents. Zeros of the partition function in the complex temperature/field plane coincide with the Julia set of the renormalization group map and provide information about phase transition points and critical exponents. In particular, the density on the curve on which the partition function zeros lie, and the angle which this curve makes with the real axis, are directly related to the critical exponents. The fact that the system is at complex temperature and/or magnetic field does not mean that the system is unphysical or nonunitary. In fact, there are well known examples in the literature where duality or star-triangle relations map a system with real temperature and magnetic field into a system with complex temperature and/or magnetic field.

Zeros of the partition function of the Baxter-Wu model on the Husimi tree satisfy the relation

$$\mu x^\gamma +1=0$$ (4)

on the attractor of the map (3). In Figure 2 we plot the zeros of the partition function in the complex field plane for different temperatures. One can see in Figure 2a that the zeros of the partition function approach only a small set of critical points located on the real axis, whereas at low temperatures (Figure 2b) the Lee-Yang singularities densely fill domains near the real axis which include patches of the real axis. This indicates a condensation of the phase transition points. In these domains there are infinitely many different paths leading to a given point on the real axis, which shows the existence of infinitely many critical exponents in the Baxter-Wu model.
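Plots like Figure 2 can be generated schematically by scanning a grid in the complex $`\mu `$ plane and testing condition (4) on the attractor. In the Python sketch below the grid, the tolerance, $`\gamma `$ and the value of $`z`$ (i.e. the temperature) are illustrative assumptions; a much finer grid is needed for figures of publication quality:

```python
def f(x, z, mu, gamma):
    """Husimi-tree recursion, Eq. (3), iterated at complex mu."""
    a = mu ** 2 * x ** (2 * (gamma - 1))
    b = mu * x ** (gamma - 1)
    return (z * a + 2 * b + z) / (a + 2 * z * b + 1)

def is_lee_yang_zero(mu, z, gamma=4, n_trans=500, n_keep=100, tol=5e-2):
    """True if |mu * x**gamma + 1| becomes small on the attractor, Eq. (4)."""
    x = complex(1.0)                    # free boundary condition
    for _ in range(n_trans):            # relax onto the attractor
        x = f(x, z, mu, gamma)
        if abs(x) > 1e8:                # iteration escaped: no zero here
            return False
    best = float("inf")
    for _ in range(n_keep):             # scan one orbit segment
        x = f(x, z, mu, gamma)
        if abs(x) > 1e8:
            return False
        best = min(best, abs(mu * x ** gamma + 1))
    return best < tol

z = 2.0                                 # exp(2*J3/kT), illustrative value
grid = [0.04 * n - 2.0 for n in range(100)]
zeros = [complex(re, im) for re in grid for im in grid
         if is_lee_yang_zero(complex(re, im), z)]
print(len(zeros), "approximate Lee-Yang zeros on the grid")
```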
The existence of infinitely many critical exponents can also be seen from Figure 1. Since at every bifurcation point one can define critical exponents in terms of staggered magnetizations, in a chaotic phase there are infinitely many critical exponents. We reiterate that in a real experiment there is no need to measure either infinitely many order parameters or infinitely many critical exponents.

## 4 Conclusion

In this work we studied thermodynamic chaos in the Baxter-Wu model. Although the Baxter-Wu model shares many features with triangular antiferromagnets, the external magnetic field competes with the three-site interaction while leaving the ground state highly degenerate, a phenomenon which makes chaos observable at the macroscopic level. This phenomenon is well known in disordered models of spin glasses, where a similar result can be obtained by introducing randomness into the microscopic parameters (e.g. the magnetic field or the interaction strength). Nevertheless, disordered models in general fail to model thermodynamic chaos, since the probabilistic measure (which in disordered models we have to choose phenomenologically) is not unique and varies depending on temperature and boundary conditions.

Investigation of the Lee-Yang singularities revealed the existence of infinitely many critical exponents in the Baxter-Wu model. The problem with infinitely many order parameters and critical exponents is that in a real experiment one would have to perform infinitely many measurements. Notice that a similar problem exists also in nonrenormalizable QFT, where the nonrenormalizable interaction leads to infinitely many counterterms and corresponding coupling constants. As we mentioned, for the Baxter-Wu model we can use the thermodynamic formalism of dynamical systems, which allows us to completely describe the system in chaotic phases by a small set of universal quantities (e.g. the slope in the distribution of the Lyapunov exponents). We hope that the duality relation between the Baxter-Wu model and a Z(2) gauge symmetric model involving a nonrenormalizable three-plaquette interaction, which we recently found, will help us to understand the connection between nonrenormalizable QFT and thermodynamic chaos.

## 5 Acknowledgments

We thank B. Hu for the hospitality at the Centre for Nonlinear Studies at Hong Kong Baptist University. SKD thanks D. Shepelyansky for helpful discussions. This work is supported in part by grants INTAS-97-347 and the A-102 ISTC project.
# Long term behavior of a hypothetical planet in a highly eccentric orbit

## 1 Motivation

The information on Earth's past climate obtained from deep sea sediments and polar ice cores indicates that the global temperature oscillated with a small amplitude around a constant mean value for millions of years, until about 3.2 million years ago. From then on the mean temperature decreased stepwise and the amplitude of the variations increased. For the last one million years Earth's climate was characterized by a distinct 100 kyr periodicity in the advancing and retreating of the polar ice sheets, known as glacial-interglacial cycles. This Ice Age period ended abruptly 11.5 kyr ago. During the following time interval, the Holocene, the global mean temperature quickly recovered to a value comparable with that observed 3.2 Myr ago.

Wölfli et al. proposed that during all this time an object of planetary size, called Z, existed in a highly eccentric orbit, and showed that Z could be the cause of these changes of Earth's climate. Because of the postulated small perihelion distance, Z was heated up by solar radiation and tidal forces, so that it was surrounded by a gas cloud. Whenever the Earth crossed this cloud, molecules activating a greenhouse effect were produced in the upper atmosphere in an amount sufficient to transiently enhance the mean surface temperature on Earth. Very close flyby events even resulted in earthquakes and volcanic activity. In extreme cases, a rotation of the entire Earth relative to its rotation axis occurred in response to the transient strong gravitational interaction. These polar shifts took place with a frequency of about one per million years on average. The first of them was responsible for the major drop in mean temperature, whereas the last one terminated Earth's Pleistocene Ice Age 11.5 kyr ago.

At present, planet Z does not exist any more. The high eccentricity of Z's orbit corresponds to a small orbital angular momentum. This could be transferred to one of the inner planets during a close encounter, so that Z plunged into the sun. Alternatively, and more likely, Z approached the Earth to less than the Roche limit during the last polar shift event. In this case it was split into several parts which lost material at an accelerated rate because of the reduced escape velocity, so that eventually all of these fragments evaporated during the Holocene.

Here we describe the method used to calculate the motion of such a hypothetical object in the presence of the other planets of the solar system, and its close encounters with the Earth. The calculations neglect possible losses of mass, orbital energy and angular momentum of Z due to solar irradiation and tidal effects. We also disregard the disappearance of Z following the last polar shift event. Consequences of these effects are discussed in ref. .

## 2 The method

In order to study the motion of an additional planet Z in the gravitational field of the sun and the other planets, a set of coupled ordinary differential equations (ODE) has to be integrated numerically, starting at a given time with the known or assumed orbital parameters of all celestial bodies of interest. For the calculation we used the Pascal program Odeint, which is based on the Bulirsch-Stoer method. It includes the modified midpoint algorithm with adaptive stepsize control and Richardson's deferred approach to the limit.
The start values for the known planets were taken from ref. , where they are given in barycentric rectangular coordinates and velocity components referred to the mean equator and equinox of J2000.0, the standard epoch, corresponding to 2000 January 1.5. All values are given in astronomical units (AU) and AU/day, respectively. To obtain the heliocentric coordinates referred to the ecliptic, we subtracted the coordinates of the sun and rotated the resulting values for all planets by 23°26′21.448″, the angle between the mean equatorial plane and the ecliptic of J2000.0. The Earth and the Moon were treated separately. The masses of the planets are from ref. .

For reasons explained in ref. we assume that Z was a Mars-like object with $`M_Z=0.11M_E`$ and that its hypothetical orbit at the epoch J2000.0 is determined by the following heliocentric parameters:

* semi-major axis: $`a=0.978`$
* numerical eccentricity: $`e=0.973`$
* inclination: $`i=0^{}`$
* longitude of the ascending node: $`\mathrm{\Omega }=0^{}`$
* argument of the perihelion: $`\omega =0^{}`$
* mean anomaly: $`M=270.0^{}`$

The calculation was a classical point-mass integration without relativistic corrections. In order to save computing time we ignored the influence of the three outermost planets Uranus, Neptune and Pluto, and restricted the numerical accuracy per integration step, the tolerance level $`eps`$, to a value of $`10^{-13}`$. Several tests were made to check whether these simplifications are acceptable or not. First of all, we evaluated the osculating orbital parameters of the Earth over the past 300 kyr without Z, but for three different tolerance levels, $`eps=10^{-13},10^{-15}`$ and $`10^{-16}`$, and found that the positions of all planets except Mercury were reproducible to within less than 100 km. A comparison of the Earth's eccentricity and inclination variations with the corresponding values published by Berger, transferred to the invariable plane of the solar system by Quinn et al., also showed good agreement. We also determined the total angular momentum of the solar system and found a negligibly small linear change in the eleventh digit of its value. All calculations were performed with a 133 MHz PC. Including Z, about 15 h were required to cover a time span of 10 kyr with an accuracy of $`10^{-13}`$. The sequence of close encounters of Z with the inner planets and the Moon amplifies the errors, so that the calculated result represents only one possible orbit. Integrations with Z were performed both forward (+300 kyr) and backward (-450 kyr) in time relative to J2000.0, in order to demonstrate that the behavior of the orbital parameters of Z and the Earth is not affected by the change in time direction.
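Some elementary properties of the assumed orbit follow directly from these elements. The Python sketch below uses only the parameters quoted above; the Kepler-equation solver is a standard Newton iteration, and heliocentric units (AU, years) are assumed:

```python
import math

a, e = 0.978, 0.973             # semi-major axis [AU] and eccentricity of Z
M = math.radians(270.0)         # mean anomaly at J2000.0

q = a * (1 - e)                 # perihelion distance: ~0.026 AU
Q = a * (1 + e)                 # aphelion distance:  ~1.93 AU
P = a ** 1.5                    # orbital period [yr] (Kepler's third law)

# eccentric anomaly from Kepler's equation M = E - e*sin(E), Newton iteration
E = M
for _ in range(50):
    E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
r = a * (1 - e * math.cos(E))   # heliocentric distance of Z at the epoch
print(f"q = {q:.4f} AU, Q = {Q:.3f} AU, P = {P:.3f} yr, r(J2000.0) = {r:.2f} AU")
```

The resulting perihelion distance of about 0.026 AU is what exposes Z to the strong solar heating invoked in the motivation.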
## 3 Results

Fig. 1 shows the time dependent variations of the osculating orbital parameters of Z (semi-major axis, eccentricity, inclination, ascending node and argument of the perihelion) over a time period of 750 kyr. The inclination is defined here as the angle between Z's orbital plane and the invariable plane, which is perpendicular to the orbital angular momentum of the solar system and close to the orbital plane of Jupiter. The movement of Z is strongly influenced by the inner planets and to a lesser extent by Jupiter. Of interest is the behavior of the inclination, which on average oscillates with a periodicity of only about 7 kyr. This period is more than an order of magnitude shorter than that of Earth's inclination, which is essentially determined by Jupiter and, without Z included in the calculation, amounts to 100 kyr. The corresponding osculating parameters of Earth's orbit are displayed in Fig. 2; as mentioned above, they are only marginally disturbed by the presence of Z.

Sudden jumps in one or several orbital parameters of Z indicate close encounters with one of the inner planets. In order to find out how often Z approached the Earth to distances close enough to influence its climate or even to provoke a polar shift, we determined the distance between Z and the Earth at each integration step and fitted a parabolic function to the values close to the distance of closest approach (see the sketch at the end of this section). The minimum of this quadratic function was then identified with this distance. The upper panel of Fig. 3 shows the result of this evaluation for the Z-Earth system over the whole time range considered here. Plotted are all encounters with distances of less than 0.02 AU $`=3\times 10^6`$ km, as indicated by the endpoints of each vertical line. The lower panel of Fig. 3 shows details of the irregular structure of these encounters, which is the result of the complex time dependence of the coordinates of Z and Earth. Not surprisingly, the encounter frequency is enhanced whenever the two inclinations nearly coincide.

Fig. 4 (left side) shows that Z approaches the Earth many times to less than the Moon's distance. As explained in ref. , such flyby events can excite strong earthquakes and volcanic activity. For distances smaller than 30,000 km such encounters could even result in a rotation of the Earth by as much as 20° with respect to the direction of the invariant angular momentum. Flyby events with distances larger than about the Moon-Earth distance are harmless in this respect, but they may still influence Earth's climate. In ref. we have shown that Z was surrounded by a gas cloud with an estimated radius of about 2.8 million km at the intersection point with Earth's orbit. The interaction of this cloud with Earth's atmosphere produced greenhouse gases in amounts sufficient to significantly increase the global temperature.

Close encounters with the Earth also imply close encounters with the Moon. These are plotted on the right side of Fig. 4. Since the mass of the Moon is much smaller than that of Z, recoil effects are much larger than in the case of the Earth and may therefore significantly influence the Moon's orbit. In fact, Fig. 5 shows events in which the semi-major axis $`a`$ suddenly changes by up to 9% relative to the mean value. Since, according to Kepler's third law, the orbital period of the Moon is proportional to $`a^{3/2}`$, the orbital angular frequency $`\omega _{orb}`$ also changes, so that the apparent rotational frequency $`\mathrm{\Omega }=\omega _{rot}-\omega _{orb}`$ becomes different from zero, assuming that $`\omega _{rot}`$, the rotational angular frequency, is not influenced by such an event. In ref. we propose that the last close encounter of Z with the Earth took place only 11.5 kyr ago, so that the Moon's orbit could also have changed at that time. Therefore the question arises whether the tidal friction on the Moon is large enough to stop $`\mathrm{\Omega }`$ within the Holocene. In the appendix we show that this is likely to be the case.
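The closest-approach interpolation used above amounts to fitting a parabola through the sampled distances around a minimum; a minimal Python sketch (the three sample values are invented for illustration):

```python
def parabola_min(t, d):
    """Vertex of the parabola through three points (t[i], d[i]).

    t: three successive, equally spaced times; d: the corresponding
    Z-Earth distances. Returns (t_min, d_min) of the fitted quadratic.
    """
    (t0, t1, t2), (d0, d1, d2) = t, d
    h = t1 - t0                          # step size (equal spacing assumed)
    denom = d0 - 2.0 * d1 + d2           # second difference, > 0 near a minimum
    dt = 0.5 * h * (d0 - d2) / denom     # offset of the vertex from t1
    d_min = d1 - 0.125 * (d0 - d2) ** 2 / denom
    return t1 + dt, d_min

# three integration steps bracketing a (fictitious) close encounter [days, km]
print(parabola_min((0.0, 1.0, 2.0), (3.2e6, 1.1e6, 2.9e6)))
```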
## 4 Conclusion

The calculations presented here show that an object of planetary size in a highly eccentric orbit approaches the Earth with sufficient frequency to influence its climate and even to produce polar shifts, the last of which terminated Earth's Ice Age period, as explained in ref. . Although the pattern of these approaches compares well with the observed pattern, we have to point out that for various reasons a one-to-one correspondence between "theory" and "observation" cannot be expected. First, since Z no longer exists at present, we have to calculate its long-term behavior on the basis of assumed orbital parameters and estimates of its mass. Second, Z lost substantial amounts of mass, orbital energy and angular momentum each time it passed through the perihelion. These effects are neglected in our point-mass model calculation. They may significantly alter the orbit of Z, and have to be included in an attempt to find out whether the disturbances of the orbits of the inner planets due to Z are within the boundaries set by present day observations. In ref. 3 we also point out that Z disappeared from the solar system either during the proposed polar shift event 11.5 kyr ago or later during the Holocene. A detailed study of the mechanisms responsible for this removal is another important task.

## Appendix: Synchronising Moon's rotation

Since a close encounter between Z and the Moon can lead to an apparent rotational frequency $`\mathrm{\Omega }=\omega _{rot}-\omega _{orb}`$ of the Moon which no longer vanishes on average, it is important to know how fast tidal friction diminishes $`\mathrm{\Omega }`$. The tidal force field is parallel to the direction Moon-Earth and has the value

$$F=\frac{2zGM_ER_M}{R^3}.$$ (1)

$`G`$ is the gravitational constant, $`M_E`$ the mass of the Earth, $`R_M`$ the radius of the Moon, $`R`$ the distance between the centers of Earth and Moon, and $`z`$ the cartesian coordinate in the direction Moon-Earth, measured from the Moon's center. On the Moon's surface $`z=R_M\mathrm{cos}(\gamma )`$, where $`\gamma `$ is the angle between the $`z`$-axis and the direction to the point considered. Under the influence of $`F`$ and the gravitational acceleration $`g_M`$ on the Moon's surface, its shape will deviate from that of a sphere. The deformation is in first order given by

$$H(\gamma )=H_0\left(\frac{1}{3}+\mathrm{cos}(2\gamma )\right).$$ (2)

In equilibrium $`H_0`$ becomes

$$H_0=\frac{GR_M^2M_E}{2R^3g_M}=6.5\text{ m}.$$ (3)

In a time dependent situation elastic tensions reduce the deformation. They fade away gradually, so that equilibrium is reached with, say, a relaxation time $`\tau `$. Assuming that the presence of an apparent rotation $`\mathrm{\Omega }`$ results in a deformation which lags behind with a phase difference $`\varphi =\mathrm{\Omega }\tau `$, the force field $`F`$ acting on this deformation will exert a torque $`D`$ which tends to stop the rotation $`\mathrm{\Omega }`$. Integration over the Moon's surface yields

$$D=\frac{\mathrm{sin}(2\varphi )}{2}\frac{16\pi }{15}\frac{G^2M_E^2R_M^6\rho }{R^6g_M}$$ (4)

where $`\rho `$ is the density at the Moon's surface. The apparent rotation $`\mathrm{\Omega }`$ then varies in time as

$$\frac{d\mathrm{\Omega }}{dt}=-\frac{D}{\mathrm{\Xi }}=-K\frac{\mathrm{sin}(2\varphi )}{2}$$ (5)

For the moment of inertia of the Moon $`\mathrm{\Xi }`$ we use the value for a homogeneous sphere, $`\mathrm{\Xi }=\frac{2}{5}M_MR_M^2`$, and set $`M_M=\frac{4\pi }{3}R_M^3\rho `$.
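The numbers quoted in this appendix can be checked directly. In the Python sketch below the physical constants are standard values inserted by us, and $`\rho `$ is taken to be the Moon's mean density, which is an assumption (the text refers to the density at the Moon's surface):

```python
import math

G   = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
M_E = 5.972e24     # Earth mass [kg]
M_M = 7.342e22     # Moon mass [kg]
R_M = 1.7374e6     # Moon radius [m]
R   = 3.844e8      # Earth-Moon distance [m]

g_M = G * M_M / R_M ** 2                      # lunar surface gravity, ~1.62 m/s^2
rho = M_M / (4.0 / 3.0 * math.pi * R_M ** 3)  # mean density, ~3340 kg/m^3
Xi  = 0.4 * M_M * R_M ** 2                    # homogeneous-sphere moment of inertia

H0 = G * R_M ** 2 * M_E / (2 * R ** 3 * g_M)  # Eq. (3): ~6.5 m
K = (16 * math.pi / 15) * G ** 2 * M_E ** 2 * R_M ** 6 * rho / (R ** 6 * g_M * Xi)
print(f"H0 = {H0:.1f} m, K = {K:.2e} s^-2")   # K ~ 1.1e-16 s^-2, cf. Eq. (5)
```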
Inserting these expressions, one finds $`K=2M_E^2GR_M^3/(M_MR^6)=1.0\times 10^{-16}`$ s⁻². For $`\varphi =\mathrm{\Omega }\tau 1`$ the solution of this equation is given by

$$\mathrm{\Omega }(t)=\mathrm{\Omega }(0)\text{e}^{-K\tau t}$$ (6)

The true relaxation time of this deformation is open to discussion. If, for example, this time is assumed to be $`\tau =1`$ d = 86 400 s, then the decay constant for $`\mathrm{\Omega }`$ becomes

$$\tau _\mathrm{\Omega }=\frac{1}{K\tau }=3500\text{ yr}$$ (7)

Assuming $`\mathrm{\Omega }=0.1\omega _{rot}`$, the phase becomes $`\varphi =0.1\omega _{rot}\tau =0.023\text{ rad}=1.3^{}`$. Thus a rather small phase lag of the tidal deformation is sufficient to synchronise the lunar rotation within the Holocene. This assumed phase lag is smaller than the value of 2.16° inferred from astronomical data on the actual lunar bulge. The fact that according to Eq. 6 $`\mathrm{\Omega }(t)`$ never stops completely is an artefact of a model with a single decay constant. The real dynamics also involves slow relaxations, which terminate the synchronisation in a finite time.

## Acknowledgement

We thank Hans-Ude Nissen for his improvements of the manuscript.
# 1 Introduction

Next-to-leading order QCD Monte Carlo programs are used for comparisons of QCD perturbation theory with experimental data. Nowadays four programs which allow user defined observables are available: MEPJET, DISENT, DISASTER++, and JETVIP (JETVIP is currently not available in the common NLO library). With the availability of different programs, cross-checks become important. In order to make a comparison, the user code has to be implemented for each program. This procedure is susceptible to bugs, and updating several versions requires a decent revision control system. To eliminate this additional source of error, a common scheme was developed and is presented here.

The scheme consists of three independent parts, which are described in the following sections. Section 2 introduces the steering card. This file is read at the beginning of each calculation and sets up the most important parameters. The user has the ability to add additional parameters to steer his own user routines. The interface to the user routines is described in Section 3. This code calculates the observables the user is interested in. For most standard procedures, e.g. performing boosts to different frames or calculating the number of jets, special functions are available in a library. These functions are explained in detail in Section 5. A more detailed version of this manual, including the full description of the function library and an example, is available with the source code at www.desy.de/~heramc/proceedings .

## 2 Steering Card

At the beginning of the calculation a steering card is read from standard input to set up the input parameters. This allows the easy modification of the most important starting values even after compilation. While the generators' adjustable parameters are known in advance, this is not true for the user code. Therefore an easy interface is provided which allows the user to expand the set of steering card parameters.

### 2.1 Basic Structure of the Steering Card

The basic structure of the steering card is equivalent to that of most Monte Carlo generators. The steering card consists of several banks, labeled by a string of four characters. The number of banks in a steering card is not limited. Each bank consists of a unique four character label, an integer version number and an arbitrary number of entries (including none). An entry consists of an identifier and the value of the field. The value can be of type integer, real or string, where string stands for a character constant of arbitrary length. Comments can be marked by an asterisk (*) in the first column or by an exclamation mark (!) anywhere in a line. If a comment mark is found, the rest of the line is ignored.

### 2.2 Predefined Banks

Many parameters, like the parton density function and the number of events, have to be set for all programs; these parameters are collected and can be set by use of a steering card bank called MOCA. Below you see the complete MOCA bank. Some of the entries are described in detail below.

```
MOCA 0         ! NLO Monte Carlo steering card, Version 0
'TYPE' 2       ! NLO program (1=Mepjet, 2=Disent, 3=Disaster++)
'NEVE' 10000   ! Number of Events (required)
'Q2MI' 10.     ! (D=100.) Q**2 min
'Q2MA' 100.    ! (D=4000.) Q**2 max
'XMIN' 0.      ! (D=0.) x min
'XMAX' 1.      ! (D=1.) x max
'XIMI' 0.      ! (D=0.) xi min
'XIMA' 1.      ! (D=1.) xi max
'YMIN' 0.      ! (D=0.) y min
'YMAX' 0.7     ! (D=1.) y max
'W2MI' 0.      ! (D=0.) Minimum W**2
```
```
'W2MA' 100000. ! (D=100000.) Maximum W**2
* PDFL and PDFH give the PDF libs used for LO (L,SET.eq.0) and
* NLO (H,SET.ne.0) calculations, should be identical for MRS and
* of corresponding type for CTEQ and GRV,
* example : PDFL=5005 (GRV94 LO) and PDFH=5006 (GRV94 NLO) for
* MSbar scheme and PDFH=5007 (GRV94 NLO) for DIS scheme
'PDFL' 3035    ! (D=3035=MRSH MSbar) PDF lib,
'PDFH' 3035    ! NGROUP*1000+NSET as in pdflib manual
'ECMS' 90200.  ! (D=90200.=27.5*820.*4.) CMS energy
'EP  ' 820.    ! (D=820) Proton energy
'LEPT' 1       ! (D=1) incoming lepton (1: e-, 2: e+)
* The following values (up to * end) are given in the lab frame
'ELMI' 11.     ! (D=0.) Minimum energy of scattered electron
* 'ELMA' 11.   ! (D=1000.) Maximum energy of scattered e-
'TLMI' 150.    ! (D=0.) Minimum angle of scattered electron
'TLMA' 172.5   ! (D=180.) Maximum angle of scattered e-
* end
'NFL ' 5       ! (D=5) number of flavours
'SFQ2' 1.      ! (D=1.) factorization scale factor for Q**2
'SRQ2' 1.      ! (D=1.) renormalization scale factor for Q**2
'SFKT' 0.      ! (D=0.) factorization scale factor for kt**2
'SRKT' 0.      ! (D=0.) renormalization scale factor for kt**2
'SFPT' 0.      ! (D=0.) factorization scale factor for pt**2
'SRPT' 0.      ! (D=0.) renormalization scale factor for pt**2
'SFCO' 0.      ! (D=0.) factorization scale factor for 1
'SRCO' 0.      ! (D=0.) renormalization scale factor for 1
'IAEL' 0       ! (D=0) switch for alpha_electromagnetic
               ! 0 : fixed, 1 : running
'AELM' 0.00729735308 ! (D=0.00729735308) alpha_electromagnetic
'MASS' 0.15    ! (D=0.15) mass of strange quark in GeV
'MASC' 1.4     ! (D=1.4) mass of charm quark in GeV
'MASB' 4.4     ! (D=4.4) mass of bottom quark in GeV
'MAST' 174     ! (D=174) mass of top quark in GeV
'ALOO' 2       ! (D=2) number of loops for alpha_s running,
               ! -1: pdf routine, 0: fixed, 1|2|3: 1|2|3 loop
'LAM4' 0.3     ! (D=0.3) Lambda_4_MSbar for running alphas
               ! or alphas for fixed alphas
```

* ITYPE is coded as follows: 0 = unknown, 1 = MEPJET, 2 = DISENT, 3 = DISASTER++.
* NEVE is the only field required for each run. It gives the number of events to be generated. (Note for DISASTER++, however, that the product FFIN * POINTS determines the number of events generated. Using the library default values, DISASTER++ will produce 50% more events than DISENT.)
* PDFL and PDFH specify the parton density functions used. Two values are given in order to enable the distinction of leading order and next-to-leading order processes. The format of each of the values is NGROUP * 1000 + NSET, where NGROUP and NSET are given by the PDFLIB.
* The renormalization and factorization scales can be set using the formula

$$\mu _s^2=f_{Q2}Q^2+f_{PT}p_t^2+f_{KT}k_t^2+f_{CO}$$

where the factors $`f_{xy}`$ are set with the steering parameters Ssxy (SFxy for the factorization and SRxy for the renormalization scale).
* IAEL and AELM steer the running of $`\alpha _{\text{el.mag.}}`$, and ALOO and LAM4 steer the running of $`\alpha _s`$ accordingly. When ALOO is set to -1, the $`\alpha _s`$ calculation included in the pdf library is used. If ALOO is 0, $`\alpha _s`$ is fixed to the value of LAM4 at every scale; otherwise LAM4 corresponds to $`\mathrm{\Lambda }_{\overline{MS}}^4`$ and the masses MASx are used for calculating $`\mathrm{\Lambda }_{\overline{MS}}^{(3,5,6)}`$.

Some parameters are still program specific; therefore an additional steering card bank exists for each generator. Below you can find the steering card banks of DISENT, DISASTER++ and MEPJET.

```
DISE 0         ! DISENT specific steering card
* 'SEDL' -1    ! (12345) Lower seed value for random generator
               ! if set to negative value a time-dependent
               ! value will be used
* 'SEDH' -1    ! (678900) Higher seed value for random generator
               ! if set to negative value a process-dependent
               ! value will be used
```
```
'SCHE' 0       ! (D=0) 0 : MSbar, 1 : DIS
               ! Please note : the PDF has to be chosen equivalent
'NPO1' 2       ! (D=2) The parameters NPO1 and NPO2 are internal
'NPO2' 4       ! (D=4) parameters used by Disent, please refer
               ! to the program manual

DISA 0         ! DISASTER++ specific steering card
'BORN' 2       ! (D=2) number of particles in born term
'PROC' 0       ! (D=1) order of process (0 : LO, 1 : NLO)
'FPRE' 1.5     ! (D=1.5) factor for # points in preparation run
'FFIN' 1.5     ! (D=1.5) factor for # points in final run

MEPJ 0         ! MEPJET specific steering card
'BOSO' 1       ! exchange boson (1:gamma, 2:Z, 3:gammaZ, 4:W)
'BORN' 2       ! (D=2) number of particles in born term
'PROC' 0       ! (D=1) order of process (0 : LO, 1 : NLO)
'NPRO' 100     ! (D=100) process number (quark+gluon, unpolarized)
               ! see Mepjet manual for details (iproc)
'IMAS' 0       ! (D=0) massive calculation (0=no, 1=yes)
'IPDF' 4       ! (D=4) pdf crossing function
'ITER' 4       ! (D=4) number of iterations for vegas adaptation
'SMIN' 0.1     ! (D=0.1) smin cone size
'PTIN' 0.1     ! (D=0.1) minimal p_t in lab frame for partons
'AMIN' -10.    ! (D=-10) minimal pseudo rapidity in lab for partons
'AMAX' 10.     ! (D= 10) maximal pseudo rapidity in lab for partons
'FILE' 'mepjet' ! base name for vegas grid files
```

Please note that, due to the usage of crossing functions, the parton density functions for MEPJET are not set with PDFL and PDFH, but by the IPDF switch, using the crossing functions found in the pdfcross directory.

### 2.3 User Functions for Steering Card Access

In order to add an additional parameter to the steering card, two steps are required. First the parameter has to be initialized, before the steering card is read, using one of the following routines:

* call SCARINT(bank,identifier,value) for integer parameters
* call SCARREAL(bank,identifier,value) for double precision parameters
* call SCARCHAR(bank,identifier,value) for string parameters

where bank is the four letter bank name, identifier is the one to four character identifier, and value is the default value, of the type corresponding to the type of the parameter. After the steering card has been read, the user can retrieve the current parameter value. To do so, three functions are available:

* value = GCARINT(bank,identifier) for integer parameters
* value = GCARREAL(bank,identifier) for double precision parameters
* value = GCARCHAR(bank,identifier) for string parameters

where bank and identifier correspond to the values given above, and value will be the one given in the steering card, or the default parameter if the corresponding bank or identifier was not set in the steering card. Values and banks may appear more than once in a steering card, but only the last value is stored and can be retrieved. In principle the get routines could be called each time one of the values is needed, but since this requires several string comparisons, it is rather slow. The recommended procedure is to get the values once and store them in a FORTRAN common block.

## 3 User Routines

In this section the routines that have to be supplied by the user in order to run the programs are described, and the general scheme is sketched. The number of user routines is quite large, but most routines can be replaced by empty or short routines if no special action is needed. Figure 1 shows the calling tree of the relevant functions and subroutines.
Routines printed in typewriter have to be supplied by the user and are described in detail below; functions in italic and roman are provided by the corresponding cross-section Monte Carlo and by the library, respectively.

### 3.1 Initialization and Termination Routines

Several initialization and termination routines are called to allow the user to predefine values, to open files, or to book histograms. None of these subroutines takes an argument. In the following, each routine is described.

disinit is called only once, at the beginning of the program. This is the place to initialize the user's steering card values. After the steering card has been read, init is called. Here the parameter values can be copied to a common block, as proposed at the end of Section 2. Output files should also be opened here. Corresponding to these initialization routines, a termination routine disterm exists. This function is called once at the end of the run. Here all files should be closed.

In addition to these called-once routines, an additional set of init and term routines exists. The evtinit subroutine is called before a new event is generated, and evtterm afterwards. Since all contributions to the observable of one event have to be added before the error can be calculated, the main summation can only be done in those routines. Please note that the evtterm routine will not be called if an event has to be dropped for technical reasons (e.g. after a floating point exception). See the web version of this manual for an example.

### 3.2 Main User Routines

For each event a number of phase spaces is generated, and each of these phase spaces can have several contributions. The main idea is to calculate your observables once for each phase space iphasno and to add all weights of the corresponding contributions to the event weight. The calculation of the observables is done in disphase and the summing is done in discontr. The user has to take care that the observables are saved in a common block and are available when discontr is called. It is sufficient to allocate space for up to 30 different phase spaces. In addition to the actual event sampling, some programs perform an adaptation loop to adapt the phase space to the region that is actually used. In this additional loop only disphase is called; the adaptation is then done according to the return value. The syntax is

```
      function disphase (iphasno, npar, ps, iadapt)
      double precision disphase
      integer iphasno, npar, iadapt
      double precision ps(4,10)
```

and

```
      subroutine discontr (iphasno, ntype, nalp, nalps, n2pi,
     +                     ilognf, jmin, jmax, weight )
      integer iphasno, ntype, nalp, nalps, n2pi
      integer ilognf, jmin, jmax
      double precision weight(-13:11)
```

where the arguments have the following meaning:

* iphasno gives the number of the phase space.
* npar is the number of outgoing particles (excluding the scattered lepton).
* ntype is the type of the contribution, one of: unknown, tree level, subtraction counterterm, finite virtual term, finite collinear term, or renormalization term.
* nalp and nalps are the orders of $`\alpha _{\text{elm}}`$ and $`\alpha _s`$, respectively.
* n2pi gives the power of an additional factor of $`\frac{1}{2\pi }`$.
* ilognf selects one of the following factors:
  + $`\lambda =1`$
  + $`\lambda =\mathrm{log}\left(\mu _r^2/Q^2\right)`$
  + $`\lambda =N_f\mathrm{log}\left(\mu _r^2/Q^2\right)`$
  + $`\lambda =\mathrm{log}\left(\mu _f^2/Q^2\right)`$
  + $`\lambda =N_f\mathrm{log}\left(\mu _f^2/Q^2\right)`$
* ps($`i,j`$) defines the phase space. The array contains several particles, distinguished by the second index $`j`$, in the following order: the incoming proton, the incoming lepton, the boson, the outgoing lepton, the outgoing proton remnant, the incoming parton, and the outgoing partons. The first index gives the 4-vector components, where $`i=1`$ corresponds to the energy, and $`i=2,3,4`$ to the $`x,y,`$ and $`z`$ components of the momentum, respectively.
* iadapt is set to 1 during the adaptation loop. The value returned by disphase is used to adapt the integration and sampling phase space. If a negative value is returned, a default adaptation calculation is used.
* weight is an array containing several weights. For each contribution only a specific range in this array, defined by jmin and jmax, is valid. Each weight has to be multiplied by a factor $`\rho _i`$ and then all weights have to be summed. The factor $`\rho _i`$ for weight $`i`$ depends on the parton density functions $`f_\alpha `$ for the different flavours $`\alpha `$ and is given by one of the following expressions (the indices in parentheses are as in the original table):
  + $`f_{\overline{\alpha }}`$ with $`\alpha =u(8),d,s,c,b,t(13)`$
  + $`f_g`$
  + $`f_\alpha `$ with $`\alpha =u(6),d,s,c,b,t(1)`$
  + $`1`$
  + $`{\sum }_{\alpha =1}^{N_f}Q_\alpha ^2f_\alpha `$
  + $`{\sum }_{\alpha =1}^{N_f}Q_\alpha ^2f_{\overline{\alpha }}`$
  + $`{\sum }_{\alpha =1}^{N_f}Q_\alpha ^2f_g`$
  + $`(1-N_f){\sum }_{\alpha =1}^{N_f}Q_\alpha ^2f_\alpha `$
  + $`(1-N_f){\sum }_{\alpha =1}^{N_f}Q_\alpha ^2f_{\overline{\alpha }}`$
  + $`{\sum }_{\alpha =1}^{N_f}f_\alpha {\sum }_{\beta =1,\beta \alpha }^{N_f}Q_\beta ^2`$
  + $`{\sum }_{\alpha =1}^{N_f}f_{\overline{\alpha }}{\sum }_{\beta =1,\beta \alpha }^{N_f}Q_\beta ^2`$
  + $`{\sum }_{\alpha =1}^{N_f}f_\alpha Q_\alpha {\sum }_{\beta =1,\beta \alpha }^{N_f}Q_\beta `$
  + $`{\sum }_{\alpha =1}^{N_f}f_\alpha Q_{\overline{\alpha }}{\sum }_{\beta =1,\beta \alpha }^{N_f}Q_\beta `$

The full weight of one contribution is then

$$w_{\text{contribution}}=\lambda \alpha _s^{\text{nalps}}\alpha _{\text{elm}}^{\text{nalp}}\left(\frac{1}{2\pi }\right)^{\text{n2pi}}\underset{i=\text{jmin}}{\overset{\text{jmax}}{\sum }}\mathrm{𝚠𝚎𝚒𝚐𝚑𝚝}(i)\rho _i$$

This somewhat complicated procedure can be performed by the library function double wsum (nord, nalp, nalps, n2pi, ilognf, jmin, jmax, weight, fscale, rscale, alphas). See Section 2 of the web version for an example. nord selects the leading or higher order parton density function (see the steering card values PDFL and PDFH). fscale and rscale are the input values for the factorization and renormalization scales, respectively. alphas is an output parameter returning the value of $`\alpha _s`$ at this phase space point.
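The bookkeeping performed by wsum can be written out schematically. The following Python sketch is only a transcription of the formula above; it is not the library's FORTRAN implementation, and the parton densities, charges and index ranges are placeholders supplied by the caller:

```python
from math import pi

def contribution_weight(lam, alpha_elm, alpha_s, nalp, nalps, n2pi,
                        weight, rho, jmin, jmax):
    """w = lam * alpha_elm**nalp * alpha_s**nalps * (1/2pi)**n2pi
           * sum_{i=jmin..jmax} weight[i] * rho[i]   (schematic)."""
    s = sum(weight[i] * rho[i] for i in range(jmin, jmax + 1))
    return lam * alpha_elm ** nalp * alpha_s ** nalps * (2 * pi) ** (-n2pi) * s

def rho_charge_weighted(pdf, charge, nf):
    """One example rho entry from the table above:
    sum over the nf active flavours of Q_alpha**2 * f_alpha."""
    return sum(charge[a] ** 2 * pdf[a] for a in range(1, nf + 1))

# toy usage: made-up parton densities and the standard quark charges
pdf = {1: 0.50, 2: 0.30, 3: 0.10, 4: 0.05, 5: 0.02}  # flavour index -> f_alpha
charge = {1: 2/3, 2: -1/3, 3: -1/3, 4: 2/3, 5: -1/3}
print(rho_charge_weighted(pdf, charge, nf=5))
```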
Additional information for experienced users: for DISASTER++ all phase spaces for one event are generated at the beginning of the event, and the disphase routine is called for each phase space before the first call to discontr is made. The ntype information is 0 for tree level and -1 otherwise. In DISENT the sequence is different: for each contribution, disphase and discontr are called right after each other. The ntype information is fully available. The adaptation loop is performed in DISASTER++ only.

### 3.3 Common Blocks

Some common blocks are provided to the disphase and discontr routines. They contain the steering card values for the predefined banks and the event kinematics. Here are the definitions (for a detailed discussion see Section 2):

```
      INTEGER MOCAVERS, NEV, ITYPE, IALELM, NPDFL, NPDFH
      INTEGER NFL, ALOOP, LEPTON
      DOUBLE PRECISION Q2MIN, Q2MAX, XMIN, XMAX, YMIN, YMAX, S
      DOUBLE PRECISION ELMIN, ELMAX, TLMIN, TLMAX, W2MIN, W2MAX
      DOUBLE PRECISION SRQ2, SFQ2, SRPT, SFPT, SRKT, SFKT, SRCO
      DOUBLE PRECISION SFCO, DEFAEM, XIMIN, XIMAX, EP
      DOUBLE PRECISION LAMBDA3, LAMBDA4, LAMBDA5, LAMBDA6
      DOUBLE PRECISION MASSS, MASSC, MASSB, MASST
      COMMON /STERMOCA/MOCAVERS, NEV, ITYPE, IALELM, DEFAEM,
     +   Q2MIN, Q2MAX, XMIN, XMAX, YMIN, YMAX, S, EP, NPDFL,
     +   NPDFH, ELMIN, ELMAX, TLMIN, TLMAX, W2MIN, W2MAX,
     +   SRQ2, SFQ2, SRPT, SFPT, SRKT, SFKT, SRCO, SFCO,
     +   XIMIN, XIMAX, NFL, ALOOP,
     +   LAMBDA3, LAMBDA4, LAMBDA5, LAMBDA6,
     +   MASSS, MASSC, MASSB, MASST
```

and

```
      INTEGER DISEVERS, ISEEDL, ISEEDH, NSCHEME
      INTEGER NPO1, NPO2
      COMMON /STERDISE/DISEVERS, ISEEDL, ISEEDH, NSCHEME,
     +   NPO1, NPO2

      INTEGER DISAVERS, IPROC, IBORN
      DOUBLE PRECISION FPRE, FFIN
      COMMON /STERDISA/DISAVERS, IPROC, FPRE, FFIN, IBORN

      INTEGER MEPJVERS, IMBOSO, IMPROC, IMMASS, IMPDF,
     +        IMBORN, IMEPROC, IMITER
      DOUBLE PRECISION RMSMIN, PTMIN_DEF, YMIN_DEF, YMAX_DEF
      COMMON /STERMEPJ/MEPJVERS, IMBOSO,
     +   IMBORN, IMEPROC, IMITER, IMPDF,
     +   RMSMIN, IMPROC, IMMASS,
     +   PTMIN_DEF, YMIN_DEF, YMAX_DEF
```

where the values correspond to the steering card parameters. The additional KINE common block defines the event kinematics with some invariants and some energies in the laboratory frame of reference:

```
      DOUBLE PRECISION Q2, XB, YB, W2, SUMKT2, SUMPT2,
     +   ESCELE, THSCELE, EE, XI, ALPHA
      COMMON /KINE/EE, XI, Q2, XB, YB, W2, ESCELE, THSCELE,
     +   ALPHA, SUMKT2, SUMPT2
```

## 4 Interface to HzTool

HzTool is a package that provides code to compare data plots published by the HERA experiments with Monte Carlo programs. Some of the plots can also be compared with next-to-leading order calculations, and an interface from this library to HzTool is presented here. Note that several observables, such as particle or track multiplicities, are not applicable to next-to-leading order programs, while others would require adding hadronisation effects to the calculation. Therefore the comparison is a priori limited to a few observables.

HzTool reads data from the HEPEVT common block, a standard format for high energy physics Monte Carlo programs. In addition, HzTool expects that every call contains the full information for one event with a corresponding weight. Since different contributions to one event require a special error treatment, a decent error calculation without changing all HzTool routines is not possible. The implied restrictions are therefore:

* Only observables available to next-to-leading order programs allow comparisons.
* Be especially aware of cuts that spoil cancellations of divergences.
* Differences according to non-perturbative effects can be expected. * Some NLO programs might give unusable results, e.g. dijet rates need the simultaneous calculation of $`𝒪(1)`$ and $`𝒪(\alpha _s)`$ tree level diagrams, which is e.g. not possible in Disaster++. To use the HzTool interface, add the file nlolib/src/hztool/hzhep.f to your source. This routine provides implementations to the standard user routines and therefore replaces the interface specified in section 3 and figure 1 by the one given in figure 2. The routines that have to be provided by the user are hzinit(), hzterm(), and hzuser(). These routines should call the hzxxxxx routines with iflag equal to 1, 3, and 2 respectively. See the HzTool manual for more information. ### 4.1 Note for authors of hzxxxxx routines In order to work with this library, hzxxxxx routines have to fulfil some additional requirements. * Add the common block HZNLO to your routine. * Initialize the variable NLO to 0 in the iflag.eq.1 section. * Add an if clause to every hbook fill or weight summation line in order to check, whether the current event counts for the desired process, e.g. check for total cross sections, whether the process is a $`𝒪(1)`$ born term or a $`𝒪(\alpha _s)`$ correction. This check can easily be done using the variables defined in the HZNLO common block. The HZNLO common block is defined as follows: ``` INTEGER NLO, TOT, DIJET, TRIPJET, QUADJET COMMON /HZNLO/NLO, TOT, DIJET, TRIPJET, QUADJET ``` An example might look like: ``` IF (iflag.eq.1) THEN C... init step NLO=0 ELSE IF (iflag.eq.2) THEN C... fill step C... make some phase space cuts CTH add total x-section IF ((NLO.eq.1).AND.(TOT.ne.1)) GOTO 11112 nall=nall+xw 11112 CONTINUE C... run some jet algorithm and make some cuts on dijet events IF (twojet) THEN IF ((NLO.eq.1).AND.(DIJET.ne.1)) GOTO 11114 nall2=nall2+xw call hfill(121,x,0.,xw) call hfill(131,q2,0.,xw) 11114 CONTINUE ENDIF ENDIF ``` ## 5 Function Library In addition to the unified interface a set of subroutines is defined. Please, see the full list below. * jet algorithms + JADE + $`k_t`$ + longitudinal boost-invariant $`k_t`$ * event shape variables * Lorentz boosts * support routines for calculation of momentum, angles, masses, etc. For the full documentation of most routines see the web version. ### 5.1 Jet Algorithms Several jet algorithms exist. The calling sequence for those is ``` double precision P(4,*), YCUT, SCALE, VEC(4,*) integer NPA, IRECOM, NUM call <algo>jet(P,NPA,YCUT,SCALE,I,IRECOM,NUM,VEC) ``` where \<algo\> is replaced by the name of the algorithm (i.e. one of jade, kt). The common arguments are : * P the input particles, * NPA the number of input particles, * VEC the output jet axes and energies, * NUM the number of jets and * IRECOM gives the recombination scheme for two particles (see the description of the support routine vadd in section 5.4 for more details). The remaining arguments are algorithm dependent. Please refer to the individual manual. The basic idea is, that SCALE corresponds to a scaling factor and YCUT corresponds to a cut value. This is for example the scale for the invariant mass and the maximum mass over scale value for the jade algorithm and the minimum jet $`E_t`$ and the maximal cone radius for the cone algorithm. I is an additional input value with no predefined meaning. 
For the KTCLUS package, a slightly different calling sequence is used:

```
      SUBROUTINE KTINCJET(P,NPA,PTMIN,NUM,V)
      DOUBLE PRECISION P(4,*),V(4,*),PTMIN
      INTEGER NUM,NPA

      SUBROUTINE KTCLUSJET(P,NPA,SCALE,YCUT,NUM,V)
      DOUBLE PRECISION P(4,*),V(4,*),SCALE,YCUT
      INTEGER NUM,NPA
```

where the arguments correspond to the arguments explained above.

### 5.2 Lorentz Boost

Two routines are provided to perform Lorentz boosts to other reference frames:

```
      SUBROUTINE boost1 (V,BR,BZ,BXZ,IDIR,IERR)
      DOUBLE PRECISION V(4),BR(4),BZ(4),BXZ(4)
      INTEGER IDIR,IERR

      SUBROUTINE boost (P,V,N,BR,BZ,BXZ,IDIR,IERR)
      DOUBLE PRECISION P(4,*),V(4,*),BR(4),BZ(4),BXZ(4)
      INTEGER N,IDIR,IERR
```

boost1 boosts one vector, given in array V, into the new frame, where V is used for input and output. boost can boost N vectors given in array P into the new frame. Here the original vectors are conserved and the new vectors are filled into array V. boost should be preferred when more than one vector is to be boosted, since the calculation of the rotation matrices is done only once for all vectors. The boost is specified by three 4-vectors, which will have the following characteristics in the boosted frame: BR will be the 0-vector, BZ will point in the z-direction, and BXZ will lie in the x-z-plane. If IDIR is zero, a normal boost is performed; if IDIR is one, the boost is reverted, such that boosting the returned particles in V with the given boost vectors (and IDIR=0) would result in the original particles P. If an error occurs during boosting, IERR will contain a non-zero value.

The routines

```
      SUBROUTINE blab2br1 (V,IERR)
      SUBROUTINE blab2br (P,O,N,IERR)
      SUBROUTINE bbr2lab1 (V,IERR)
      SUBROUTINE bbr2lab (P,O,N,IERR)
      DOUBLE PRECISION P(4,*),O(4,*),V(4)
      INTEGER N,IERR
```

perform the boosting of one or several vectors from the H1 laboratory frame to the Breit frame and vice versa.

### 5.3 Event Shape Routines

A single routine is provided to calculate a variety of event shape variables:

```
      SUBROUTINE getevtshape (NPAR,PS,SCALE,SHAPE,SUME)
      DOUBLE PRECISION PS(4,10),SCALE,SHAPE(NOVAR),SUME
      INTEGER NPAR, NOVAR
      PARAMETER (NOVAR=20) ! the current number of variables
```

where the input parameters are

* NPAR: the number of input particles,
* PS(4,10): the four-vectors ps() from disphase,
* SCALE: the scale to normalise to for some variables, usually $`Q`$.

The subroutine outputs

* SUME: the total energy in the current region of the Breit frame,
* SHAPE(NOVAR): an array with all the event shapes. They are indexed by a letter code, e.g. SHAPE(itz) for the current jet thrust.

The remaining 6 of the NOVAR=20 variables are internal or obsolete and should not be used; the index variables for all 14 available event shapes are made available by including the file index.inc. The routine is designed to be called from within disphase and to be passed the common library four-vectors ps() directly. No cut on the total energy in the current region (to ensure infrared safety) is performed within the routine; this quantity is returned, and any cut is the responsibility of the user routine.

### 5.4 Support Routines

A collection of small functions is available. Most of the routines come in two versions: one that takes a vector, and one that takes an array of vectors and an index as arguments.

* vcopy (A,I,B,J) copies the i'th vector of array A to the j'th vector of array B.
* vadd(A,I,B,J,IRECOM) adds the j'th vector in array B to the i'th vector in array A.
The available recombination schemes are:

* E/JADE: $`𝐩:=𝐩_𝐚+𝐩_𝐛`$ (4-vector addition).
* E0: $`E:=E_a+E_b`$, $`\stackrel{}{p}:=\frac{E}{|\stackrel{}{p}_a+\stackrel{}{p}_b|}(\stackrel{}{p}_a+\stackrel{}{p}_b)`$ (momentum rescaling, giving a massless jet).
* P: $`\stackrel{}{p}:=\stackrel{}{p}_a+\stackrel{}{p}_b`$, $`E=|\stackrel{}{p}|`$ (energy rescaling).
* Snowmass: $`p_t=p_{t,a}+p_{t,b}`$, $`\eta =\frac{1}{p_t}(\eta _ap_{t,a}+\eta _bp_{t,b})`$, $`\phi =\frac{1}{p_t}(\phi _ap_{t,a}+\phi _bp_{t,b})`$, $`E=|\stackrel{}{p}|`$.

Further support routines:

* P = ATH (A,I) calculates the theta angle (in radians) of the i'th vector of array A with respect to the +z-axis.
* P = VP2 (V) and P = AP2(A,I) return the momentum squared of the particles V and A(i), respectively.
* P = VMASS2 (V) and P = AMASS2(A,I) return the invariant mass squared of the particles V and A(i), respectively.

In the above functions and subroutines the variables are defined as follows:

```
      DOUBLE PRECISION A(4,*), B(4,*), V(4), P
      INTEGER I,J,IRECOM
```

## 6 Summary

In this paper we have presented a scheme for a complete library of next-to-leading order QCD programs. This library consists of three parts: a steering card mechanism, a unified interface for the user routines, and an expandable set of FORTRAN library functions.

## Acknowledgments

We would like to thank the authors for their support, and we appreciate their comments and suggestions on the program and this paper. We also thank the organizers and conveners of this workshop for the interesting topics.
# 1 Infrared imagers

Deep multicolour imaging of large fields is one of the most important and popular tools for studying a large variety of astrophysical objects. This simple recognition, together with the recent availability of very large format CCD detectors, has prompted many groups to develop wide field optical cameras for large telescopes. An excellent analysis of the status and performances of these instruments can be found in the WFI-LBT report (Giallongo et al. 1999), which also contains an exhaustive discussion of the scientific cases for deep wide field imaging.

While the astronomical community is taking (or will soon take) advantage of several powerful wide field optical imagers, the situation for imaging in the near infrared (1-2.5 μm) is much less encouraging. Table 1 is a list of all the NIR instruments working, or planned for the next ~5 years, on large telescopes. In spite of the fact that the latest generation 1024² (and soon 2048²) IR arrays allow coverage of about 10'×10' fields at seeing-limited resolutions, the average field covered by the NIR instruments is much smaller. In particular, the situation on the largest telescopes is far from encouraging. On the upgraded MMT, a 5'×5' camera was proposed by the Cambridge group in 1996 but, to the best of my knowledge, this project has been cancelled. A similar instrument is now proposed by the CfA, but its status is still very unclear (see ref. T18). On 8-10m class telescopes no wide field imager is officially planned apart from NIRMOS, which is in fact a multi-object spectrometer and will seldom be used as an imager.

The main consequence of this situation is that all deep imaging surveys will be severely biased toward sources which are blue enough to be detected by CCDs, while missing intrinsically red objects such as very cool brown dwarfs, elliptical galaxies at z > 1.5 and QSOs at z > 10 (just to mention a few of the "hottest" subjects). This problem could be alleviated if an instrument like WIDE became operative on either the LBT and/or the TNG telescopes. In both cases, the "survey power" would be a factor of ~10 larger than that of any other instrument available or planned (see Table 1). I report here the main results of the phase-A study of the WIDE instrument.

## 2 The WIDE instrument

The main goal of the WIDE project is to build a simple and relatively inexpensive NIR instrument which can cover the largest possible field of view for seeing limited imaging on 3.5m and 8m class telescopes. The prime focus of the LBT, with a natural scale of 21"/mm (i.e. 0.38"/pix on a Rockwell HgCdTe array), is the ideal site for such an instrument. Simple considerations on the relative roles of airglow and thermal backgrounds, together with the recent experience of the Ω-prime instrument at Calar Alto, indicate that the prime focus is an excellent station to perform IR imaging in the airglow dominated bands, i.e. from 1 μm to K'. Moreover, a prime focus camera is much simpler, and consists of far fewer optical elements, than Cassegrain instruments with a similar field of view. More details on the expected performances can be found in the original WIDE proposal that was submitted to the CNAA in May 1998 (see ref. T24), which also includes a quite detailed analysis of the various technological aspects of this instrument.
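The quoted pixel scale follows from simple plate-scale arithmetic. In the Python sketch below the 18.5 μm pixel pitch of the Rockwell array is our assumption; it reproduces the quoted 0.38"/pix to within rounding and shows that a full 2048-pixel array spans slightly more than the well-corrected 12'×12' field discussed below:

```python
plate_scale = 21.0    # LBT prime-focus scale [arcsec/mm], quoted above
pixel_pitch = 0.0185  # assumed Rockwell pixel pitch [mm] (18.5 um)
npix = 2048           # pixels per array side

pix_scale = plate_scale * pixel_pitch  # ~0.39 arcsec/pixel (0.38 quoted)
field = npix * pix_scale / 60.0        # ~13 arcmin across the full array
print(f"{pix_scale:.3f} arcsec/pix, {field:.1f} arcmin field")
```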
In the last year we concentrated on the opto–mechanical design and verified the feasibility (and estimated the cost) of the various parts by contacting various companies. Another excellent possibility is to exploit the prime focus of the TNG telescope, in which case it is necessary to use a mosaic of four arrays to cover an area large enough to achieve a survey power similar to the LBT. Figs. 1, 2 show the optical layouts of the instruments.

The larger lenses (max. diameter $`\sim `$400 mm) are manufactured out of standard fused silica (IR grade) or glasses from the Ohara Corp. (S-FPL51 and S-FPL52) or Schott (FK54) catalogues. All these glasses have negligible internal absorption at $`\lambda <2.4`$ $`\mu `$m. The smaller lenses are made of calcium or barium fluoride crystals which are regularly produced in large blanks by several companies around the world. The sizes of these lenses are quite standard. In particular, the BaF<sub>2</sub> lens is slightly smaller than the collimator of ISAAC, while the CaF<sub>2</sub> elements are all significantly smaller than the lenses normally used in UV micro–lithography instruments. The only non–spherical element is the first CaF<sub>2</sub> lens, which has a conical surface: K=–0.22 and K=–0.36 for LBT and TNG, respectively. The sizes and shapes are within the capabilities of companies specialized in single point diamond machining (e.g. Janos Technology). However, it should be noted that the aspheric in the TNG design, with a maximum deviation from sphere of 290 $`\mu `$m, is much less demanding than that for LBT, which deviates up to almost 800 $`\mu `$m.

The system for LBT is virtually free from chromatism and the image quality is excellent, i.e. $`>`$80% of the light within one pixel, over most of the 12’$`\times `$12’ field of view covered by a single 2048<sup>2</sup> array (see Fig. 1). The image distortion is also quite good: 0.5% at the field edge (6’ from axis) and 1.0% at the corners (8.5’ from axis). The design for TNG, which employs the much more dispersive (but cheaper) infrasil glass, requires refocussing in the various bands and provides excellent images (see Fig. 2) over a spectacularly large field of view, namely 35’$`\times `$35’, with an image distortion of only 0.5% and 1% at the field edges and corners, respectively. This is sufficient to accommodate a mosaic of four non–buttable 2048<sup>2</sup> arrays, each of them covering an area of 13’$`\times `$13’.

A specific advantage of both designs is that the positioning of the first lens has very loose tolerances: a decenter of 0.5 mm and/or a tilt of 0.1 degrees can be fully compensated by shifting/tilting the whole dewar and, in practice, produces a negligible effect on the image quality. Therefore, the first lens can also act as the dewar window without requiring any special mechanical mount. The deformations induced by the pressure difference between the outside environment (air) and the inner vacuum amount to several microns (cf. Fig. 3) but have a totally negligible effect on the image quality.

The cost estimate is summarized in Table 2, which also includes the names of the companies we have already contacted for the various items. Note that the overall cost of the instrument is dominated by the price of the 2048<sup>2</sup> Rockwell array(s), especially in the case of the instrument for the TNG, which requires four such devices.
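A trivial check of the TNG mosaic geometry (sketch; all numbers taken from the text above):

```
field, sub = 35.0, 13.0                        # arcmin, from the TNG design
fill = 4 * sub**2 / field**2
print(f"mosaic filling factor ~ {fill:.0%}")   # ~55%; the rest is inter-array gaps
```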
## 3 Low dispersion spectroscopy: the AMICI device

Low dispersion IR spectroscopy covering the widest possible wavelength range is a fundamental tool for studying very faint objects with broad spectral features. These include:

– elliptical galaxies at $`z>1.5`$, which can be recognized by the 4000 Å break characteristic of relatively old stellar populations (e.g. Soifer et al. 1999);

– methane dwarf stars, i.e. brown dwarfs cooler than 1500 K, whose spectrum is characterized by the prominent CH<sub>4</sub> band–head at 1.6 $`\mu `$m as well as by the very broad H<sub>2</sub>O bands which extend into the J, H and K bands (e.g. D’Antona et al. 1999).

All the existing/planned IR spectrometers for large telescopes employ gratings and/or grisms as light dispersers. This choice intrinsically limits the spectral coverage to 1 or at most 2 photometric bands (e.g. J+H or H+K) per frame. The average efficiency of the grating/grism over the free spectral range is $`<`$50%, including the losses introduced by the order sorter filter. The alternative approach, which we adopted in NICS, the IR instrument for the TNG, is to use a prism–based disperser, which is sketched in Fig. 4. The Crown–Flint–Crown symmetrical combination corresponds to the classical Amici mount with, however, separated (not glued) elements. The resolving power is RS$`\simeq `$50 and the average efficiency we achieved with ad–hoc multi–layer A/R coatings exceeds 80% (with peaks $`>`$90%) over the full 0.85–2.45 $`\mu `$m range (see Fig. 4).

To estimate the “speed” of this device it is convenient to compare the NICS–AMICI combination with the ISAAC–LR spectroscopic mode. The latter uses a grating disperser with an average efficiency (within each band) of 50% and requires 4 different exposures to cover the 0.9–2.5 $`\mu `$m range. The AMICI disperser has an efficiency a factor of 1.8 higher and delivers the full spectrum in a single shot. Therefore, the factor of $`\sim `$7 gain in time one has using AMICI on the TNG should fully compensate for the factor of 5 loss due to the smaller area of the TNG relative to the VLT. In other words, AMICI on the TNG should soon produce low resolution spectra of similar quality, and with similar integration times, as ISAAC on the VLT.
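The speed argument above reduces to two numbers (a sketch; the 8.2 m and 3.58 m mirror diameters for the VLT and TNG are assumed, as they are not quoted in the text):

```
time_gain = 1.8 * 4               # efficiency gain x single shot instead of 4 exposures
area_loss = (8.2 / 3.58) ** 2     # assumed VLT vs TNG mirror diameters (m)
print(f"time gain ~ {time_gain:.1f}, area loss ~ {area_loss:.1f}")
# ~7.2 vs ~5.2: the speed gain roughly offsets the smaller collecting area
```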
# An optical solution of Olbers’ paradox

## 1 Introduction

This historical paradox, highlighted by Olbers almost two centuries ago (Olbers, 1826), holds that the night sky should be as bright as the sun’s disk because we should encounter rays from a stellar surface in any direction we look. More particularly, the argument holds that the stellar light must cover an increasing area $`r^2`$ as its distance $`r`$ from its source increases, its brightness $`J`$ diminishing as $`J_0/r^2`$, where $`J_0`$ is the stellar surface luminosity. This is the well known inverse-square law for light, and takes only the geometrical spread in 3-dimensional space into account. According to the paradox, as we look at regions of the sky where the stars are very far, they should each appear dimmer, but a given solid angle would cover $`r^2`$ stars, assuming a uniformly populated infinite universe. The luminosity of the sky should therefore be $`(J_0/r^2)\times r^2=J_0`$, meaning that the background sky should be so bright that the stars should be indistinguishable against it.

I present two fundamental results: first, that an infinitesimally small attenuation $`\sigma >0`$, barely enough to change the propagation law to the form $`J_0e^{-\sigma r}/r^2`$, suffices to solve the paradox, and second, that such an attenuation happens to be inherent in the wave nature of light. The results are unintuitive because almost any kind of attenuation leads to this form, and absorption and scattering by dust have been considered inadequate in the past. Harrison argued (Raychaudhuri, 1979) that the dust would eventually attain thermal equilibrium with the stars and effectively stop absorbing more energy. Since the radiation too would eventually reach equilibrium, scattering of itself cannot solve the paradox either. The standard model offers three plausible solutions: that the Hubble flow causes the light to lose energy well in excess of the inverse-square attenuation, that the universe is too young for thermal equilibrium, and that the universe is as such finite. Wesson has shown that the first would actually contribute less (Wesson et al., 1987), so finiteness of the universe, in both age and extent, is currently believed to be the reason for the darkness of the night sky.

I shall show that a similar cutoff occurs because of the inherent attenuation, and that it necessarily leads to a further mechanism of spectral modification that would make the most distant galaxies appear primeval, which is again known and currently attributed to the big bang. In both cases, I exploit the *smallness* of the mechanisms needed for the respective effects. The approach is only possible because of the exponential factor in the attenuated propagation law, but more importantly, it demonstrates that the small orders routinely left out in the approximations applicable on earth could be significant physics on the cosmological scale. To further emphasise the error in letting approximations dictate our reasoning, I shall show that an even smaller order of attenuation due to the same mechanism would explain the missing solar neutrino flux, currently attributed to neutrino oscillation, and could mean new insight of a fundamental kind into the physics of weak interactions. Unlike the dust effects, the attenuation of present concern depends only on the presence, and not the thermal state or other properties, of matter, and is therefore immune to thermal equilibrium.
It results from a diffuse entrapment of radiation due to successive diffraction and gravitational deflections that continually turn a portion of the wavefront. While the occurrence of successive deflections is known, for example, in Laue diffraction theory, its implications for astrophysics have not been examined in previous treatments, such as in the context of extinction by dust (Spitzer Jr., 1978, pp. 149–153), possibly because one ordinarily thinks of diffraction as carrying wave energy around obstructions, increasing rather than diminishing the net power flow. This would be the case in a hypothetical universe where the sources are assumed to be behind a plane of diffracting obstructions, but the notion does not really extend to the three-dimensional universe involved in the paradox, where the stars themselves are the obvious obstructions to each other’s light. The enhancing property turns out to be intransitive because the successive deflections can then keep a fraction of the radiation from ever reaching its original destination.

Another reason why this result was unobvious is that Fraunhofer’s approximation is invariably assumed because of the immense distances involved. A solution to Olbers’ paradox then appears to be ruled out because the loss of the direct rays from a star due to *en route* diffraction by an angle, say $`\theta `$, would be made up by the rays from another star behind the diffractor at an angle $`\theta `$. While the argument is somewhat weaker than enhancement, it reveals the error in our past intuition, because the probability $`p(r_s)`$ of finding a compensating star *at the same or less distance $`r_s`$* behind the diffractor depends on $`r_s^2/r^2`$, which is certainly less than unity. The approximation assumes that both the sources and the observer are infinitely far from the diffracting object, i.e. $`r_s\rightarrow \infty `$ and $`r\rightarrow \infty `$, in which case the ratio does not matter. These conditions are, however, applicable only in the vicinity of a given diffractor, and cannot be legitimately applied when the obstructing objects are distributed over the same scale of distances as the sources. In the presence of an attenuation $`\sigma `$, the distance $`r_s`$ matters because the compensating source becomes more likely to be farther by the triangle theorem, and therefore likely to be dimmer.

In both the real universe and Olbers’ scenario, therefore, the light from a distant source located on a geometrically unobstructed straight line from us does get diminished by diffraction due to obstructions lying off the straight line. This loss would be compensated, as in the Fraunhofer case, if enough diffracted light from elsewhere could rejoin the straight line path. Traditional wisdom suggests that the repeated deflections be treated as a random walk leading to a slow diffusion of the photons, which would not yield a net reduction of the average luminosity. However, there are problems with this view, because while it is reasonable to assume that the diffracting objects are randomly distributed, their gross motions are not random and are quite slow in relation to the interstellar distances. What we have is an essentially static pattern lacking the temporal randomisation of direction needed to qualify it as a random walk.
More particularly, all efforts to simulate thermalisation with fixed dynamical models have consistently led to persistent oscillatory states, since the very first attempt in 1953 by Fermi, Pasta and Ulam (Fermi et al., 1965; Fillipov et al., 1998), showing that mere complexity of dynamical structure is not enough for assuming diffusion. Moreover, any static pattern necessarily contains circulations, which might not only explain the FPU problem, but in our case, trap some of the light virtually forever. Harrison’s argument cannot be applied to such states because the circulations would be centrally dependent on the individual sources, each of which presumably has a finite lifetime even in Olbers’ scenario. We do expect most of the “trapped” energy to eventually diffuse out, but we have no basis to assume that *all* of it will. Rather, we can expect a portion of the energy to get absorbed or turn into matter, given that the attenuated propagation law is already characteristic of the Klein-Gordon equation $`(\nabla ^2-\sigma ^2-\partial ^2/\partial t^2)\psi =0`$. The latter is simply the quantum version of the relativistic argument that radiation when retarded to effective speeds less than $`c`$ should exhibit rest mass, and relates to our treatment of the solar neutrino problem. We can thus be certain of a net attenuation $`\sigma >0`$, and this suffices, as shown next, to solve the paradox.

## 2 Solution of the paradox

As stated, our key argument is that any attenuation whatsoever, so long as it operates on the large scale, solves the paradox independently of the standard model. We seek an attenuation $`\sigma `$ (dB m<sup>-1</sup>), such that the propagation law for light changes from $`r^{-2}`$ to $`r^{-2}e^{-\sigma r}`$, which reduces the background brightness of the sky to

$$J_\sigma =\int _0^\infty J_0e^{-\sigma r}dr=J_0/\sigma .$$ (1)

While the integral is well known, the solution is not immediately obvious, largely because the value superficially resembles $`J_0`$, making it look as if we need a very large $`\sigma `$, of the order of at least $`130`$ dB, i.e. a factor of $`10^{13}`$ (Roach and Gordon, 1973, pp. 24–25), to get a dark background sky, which hitherto seemed impossible without the big bang theory. The resemblance is misleading because the unattenuated $`J`$ differs in dimensions from $`J_0/\sigma `$, as the latter has the dimensions of *luminosity $`\times `$ distance*. For legitimate comparison, we must express $`J`$ in exactly the same dimensions; hence in the statement of the paradox, the “Olbers luminosity” $`J`$, which one may informally think of as the brightness of the sun’s disk, corresponds not to $`\sigma =1`$ but to $`\sigma =0`$, i.e.

$$J\equiv \underset{\sigma \rightarrow 0}{lim}\int _0^\infty J_0e^{-\sigma r}dr=\underset{\sigma \rightarrow 0}{lim}J_0/\sigma ,$$ (2)

which is infinity. Conversely, when we calibrate with respect to the sun’s disk, the attenuated background sky should be infinitesimally dim, for any $`\sigma >0`$. To appreciate why this should be so, consider how bright the background needs to be in order to match the sun’s disk. In Olbers’ argument, whichever direction we look in, our line of sight must meet a stellar surface, and at any finite angular resolution $`\theta `$, the brightness should correspond to the number of stars included within the solid angle $`\theta `$. Accordingly, the observed brightness would be $`J_\sigma \approx J_0e^{-\sigma r}`$ along that direction, and *that set of stars then needs to be $`e^{\sigma r}`$ times brighter than the sun in order to match $`J_0`$*.
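Eqs. (1)–(2) are elementary, but a direct numerical check makes the $`1/\sigma `$ behaviour explicit (a sketch):

```
import numpy as np
from scipy.integrate import quad

J0 = 1.0
for sigma in (1.0, 1e-3, 1e-6):
    J_sigma, _ = quad(lambda r: J0 * np.exp(-sigma * r), 0.0, np.inf)
    print(f"sigma={sigma:g}  J_sigma={J_sigma:g}  J0/sigma={J0/sigma:g}")
# J_sigma tracks J0/sigma exactly and diverges as sigma -> 0, i.e. eq. (2);
# note also that 130 dB is 10**(130/10) = 1e13, the brightness ratio quoted above
print(10 ** (130 / 10))
```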
Clearly, it does not matter if $`\sigma `$ is very small; as long as it is nonzero, we have an effective cutoff of the observable universe at

$$r_n=\sigma ^{-1}\mathrm{log}(J_0/J_n)$$ (3)

for a finite $`\sigma `$, where $`J_n`$ represents the background noise in the measuring process. By eq. (3), the standard model cutoff, corresponding to an age of the universe of $`\sim `$15 Gy, is equivalent to

$$\sigma \approx 130\text{ dB}/15\text{ Gy}=9\times 10^{-25}\text{ dB m}^{-1}.$$ (4)

This is far less than what one might naively expect from the form of $`J_\sigma `$ (eq. 1), and amounts to a mere $`8\times 10^{-9}`$ dB per light-year. As remarked in the introduction, it is the *smallness* of the attenuation needed to explain the apparent big bang cutoff that makes it impossible to rule it out on the basis of terrestrial physics. Furthermore, eq. (4) is as yet an upper bound, because we ignored the diffraction from nonluminous bodies as well as the gravitation of these and the visible objects, which contribute substantially to the deflections. We also ignored the dust extinction in our own neighbourhood (Roach and Gordon, 1973, ch. 4), to which Harrison’s argument again does not apply, and the impact of the Hubble flow, which together account for a good part of the $`130`$ dB. Note that Harrison’s argument remains valid for dust on the large scale, and the subtlety that overcomes it for the diffractive scattering, described in the next section, is the coherence and dependence of the diffuse circulatory states on their respective sources.

## 3 Scattering approximation

Eq. (2) establishes our principal point that only an extremely small attenuation is needed to reproduce the big bang cutoff. It remains to be shown that such an attenuation is indeed possible and likely from successive diffraction. As explained in §1, we are concerned with multiple interstellar hops that are each much larger than stellar diameters, so that the Fraunhofer theory can be applied to the individual diffractions. We may more particularly treat the stars as point objects, representing the diffraction by an angular spreading function $`f(\varphi ,\delta )`$, with $`f\ge 0`$ $`\forall \{\varphi ,\delta \}`$, applicable to a parallel beam of incident light, giving

$$J(\varphi ,\delta )=J_0f(\varphi ,\delta )\text{ and }\int _{\varphi =0}^{2\pi }\int _{\delta =0}^\pi f(\varphi ,\delta )d\varphi d\delta =1,$$ (5)

for the diffracted light, $`\varphi `$ being the azimuthal angle around the beam axis and $`\delta `$, the angular spread from the axis. We shall now examine the special treatment necessary to account for successive diffractions. The spread function $`f`$ is rather like the differential scattering function $`\sigma (\mathrm{\Omega })`$ of the Rutherford model (Goldstein, 1980, §3-20), but several important differences must be noted. Firstly, we are concerned with continuous wavefunctions, not particles, so that the “scattering” itself is not at all probabilistic; the only probability inherent in the model is the stellar distribution. Secondly, the scattering function $`f`$ depends on the $`\lambda /D`$ ratio, where $`\lambda `$ is of course the wavelength and $`D`$, the mean stellar diameter. This is treated in more detail in terms of Fresnel-Kirchhoff theory in Appendix A. More importantly, the result of interest is the net attenuation and not a cross-section of interaction. It is common to use $`\sigma `$ for both, but the second notion is not of concern here.
We shall now formalise the cumulative forward loss and the nonzero “backscatter” occurring at each encounter. The second of eqs. (5) represents the conservation of energy at each diffractive encounter, since it means that

$$\int _{\varphi =0}^{2\pi }\int _{\delta =0}^\pi J(\varphi ,\delta )d\varphi d\delta =J_0.$$ (6)

We start by applying $`f`$ to a bundle of rays incident on a first star. Since the rays spread as if from a point source, they acquire a $`1/4\pi r^2`$ spreading loss, even if we had started with a parallel bundle, which would be equivalent to a planar wavefront. A second encounter after a distance $`r_1`$ therefore yields

$$J(\varphi ,\theta )=\frac{J_0}{4\pi r_1^2}\int _{\delta =0}^\theta f(\varphi ,\theta -\delta )f(\varphi ,\delta )d\delta \equiv \frac{J_0}{r_1^2}g_1(\varphi ,\theta )$$ (7)

where $`g_1`$ denotes the first cumulative integral

$$g_1(\varphi ,\theta )=\frac{1}{4\pi }\int _{\delta _1=0}^\theta f(\varphi ,\theta -\delta _1)f(\varphi ,\delta _1)d\delta _1.$$ (8)

We thus obtain the recursive set of integrals

$$g_0\equiv f\text{ and }g_n(\varphi ,\theta )=\frac{1}{4\pi }\int _0^\theta f(\varphi ,\theta -\delta )g_{n-1}(\varphi ,\delta )d\delta \text{ for }n=1,2,\mathrm{}$$ (9)

describing $`n`$ successive encounters as

$$J(\varphi ,\theta )=J_0g_n(\varphi ,\theta )\prod _{j=1}^n\frac{1}{r_j^2}\approx J_0g_n(\varphi ,\theta )\overline{r}^{-2n}$$ (10)

where $`r_j`$ are the intervening distances, and $`\overline{r}`$, the mean distance between stars. Integrating eq. (10) over $`\varphi `$, and summing over the contributions from one or more encounters, we obtain the total “gain” at $`\theta `$ from the initial beam as

$$g(\theta )=\frac{g_0(\theta )}{\overline{r}^2}+\frac{n_1g_1(\theta )}{\overline{r}^4}+\frac{n_2g_2(\theta )}{\overline{r}^6}+\mathrm{}\text{ where }g_n(\theta )\equiv \int _0^{2\pi }g_n(\varphi ,\theta )d\varphi .$$ (11)

Here, $`n_j>1`$ denote the mean number of parallel paths corresponding to the range of $`\varphi `$ for $`j`$ encounters, and $`g_j(\theta )`$ are the per-path contributions. It should be at least intuitively clear that $`n_j`$ would be an increasing combinatorial function of $`j`$, offsetting the increasing geometrical attenuation from the denominator ($`\overline{r}^{2j}`$). Eq. (11) reveals an interesting property of the successive diffractions: that the $`g_j(\theta )`$ increase with $`j`$ for precisely the reason that non-paraxial angles are ordinarily ignored in terrestrial optics – at large $`j`$’s, small incremental angles $`\delta `$ become significant in the integrand, so that

$$\underset{j\rightarrow \infty }{lim}g_j(\theta )\approx \underset{\delta \rightarrow 0}{lim}\left[\frac{1}{2\pi }\int f(\varphi ,\delta )d\varphi \right]^{\theta /\delta }\lesssim 1$$ (12)

as the integral converges to the zero-th order beam. Eq. (12) represents the observation that a succession of small diffraction angles adds up to a large total deflection with significant amplitude, since the component deflections are paraxial. Three encounters each of $`\pi /3`$, for example, suffice to make a back contribution, and the direct backscatter $`g_0(\theta \approx \pi )`$ suffers no $`1/\overline{r}^2`$ attenuation as it involves no intermediate hops at all. As a result, a nonzero total reflection is generally guaranteed and the increasing $`n_j`$ also partly compensate for the $`1/\overline{r}^{2j}`$ loss.
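The recursion (9) is straightforward to iterate once $`f`$ is tabulated on a grid of $`\delta `$; a minimal sketch (azimuthal dependence suppressed, and a narrow Gaussian used in place of the Airy-type form of Appendix A, purely for illustration):

```
import numpy as np

def iterate_g(f, theta_max=np.pi, n_steps=3, n_grid=600):
    """Tabulate g_0 ... g_n of eq. (9) by trapezoidal convolution."""
    d = np.linspace(0.0, theta_max, n_grid)
    gs = [f(d)]                                    # g_0 = f
    for _ in range(n_steps):
        prev, new = gs[-1], np.zeros(n_grid)
        for i in range(1, n_grid):
            # g_n(theta) = (1/4pi) int_0^theta f(theta-delta) g_{n-1}(delta) d(delta)
            new[i] = np.trapz(f(d[i] - d[:i+1]) * prev[:i+1], d[:i+1]) / (4*np.pi)
        gs.append(new)
    return d, gs

f = lambda x: np.exp(-(x / 0.05) ** 2)             # illustrative spread function
d, gs = iterate_g(f)
# successive g_n spread to progressively larger angles, as eq. (12) anticipates
```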
$`g_0(\pi )`$ is generally ignored in terrestrial optics because the $`\mathrm{cos}^2(\theta )`$ factor is more pronounced over small distances; this too is inappropriate over the cosmological scale of distances. As stated in §1, motions of the surface of the diffracting star, although many orders larger than the wavelength of light, are not significant in the present context, principally because $`f`$ represents the angular distribution of the radiant power and is unaffected by phase fluctuations.

The presence of a net backscatter $`g(\pi )`$ means that light from sources in the observer’s own neighbourhood will tend to lighten the sky. This is an expected effect of scattering due to dust, for example: Roach and Gordon estimate that for an observer near the centre of our galaxy, the night sky should be relatively opaque (Roach and Gordon, 1973). This does not contradict either the paradox or Harrison’s argument, but the diffractive backscatter potentially does, because it would be spread over cosmological distances. Our result remains intact because the backscattered light would itself be again subject to repeated diffractions and suffer the same attenuation. The totality of “local” sources we need to consider would cumulatively increase as $`r^3`$, but gets overcome by the exponential attenuation $`e^{-\sigma r}`$ within a limited $`r`$.

## 4 Applications

As mentioned in §1, our mechanism has several interesting applications, beginning with the manifestation of rest mass of the photons because of the Klein-Gordon form of the attenuated propagation law. This suggests that our trapped circulatory photon states could be responsible for a portion of the dark matter, especially if the universe were very old. A related observation is that the density of circulatory states would be proportional to the density of particulate matter, because the deflections would become available at smaller distances in dense neighbourhoods. We would expect the attenuation to be greater within galaxies than in the sparser regions between them, and the resulting “optical dark matter” distribution to be consistent with the overall galactic dark matter indicated by their rotation profiles, as the circulatory states would presumably add up with radial distance from the galactic centre. At present, we do not know how to estimate this optical component of dark matter.

The attenuation might also explain the intensity of gamma ray bursts from galactic regions. For a given aperture between the stars and other objects surrounding the line of sight, the basic principles of diffraction dictate that gamma rays would suffer far less diffractive loss than visible light because of their shorter wavelengths. The observed gamma ray burst intensities should therefore be much more than we might expect from the visible brightness. While the reasoning appears to be in the right direction, it remains to be substantiated.

Another high energy scenario consistent with our theory is the known attenuation of the solar neutrino flux, which, as mentioned in §1, is currently attributed to quantum oscillation. Given that the mass of the sun is $`M_\odot \approx 2\times 10^{30}`$ kg and the average mass of a nucleon, $`1.67\times 10^{-27}`$ kg, we estimate there are $`1.2\times 10^{57}`$ nucleons within the sun’s volume of $`1.4\times 10^{27}`$ m<sup>3</sup>, which allows roughly $`1.2\times 10^{-30}`$ m<sup>3</sup> per nucleon.
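These solar figures are easy to reproduce (a sketch; the solar radius of $`\approx 7\times 10^8`$ m used for the volume and path length is an assumed standard value, not quoted in the text):

```
M_SUN, M_NUCLEON, R_SUN = 2e30, 1.67e-27, 7e8     # kg, kg, m (R_SUN assumed)

n = M_SUN / M_NUCLEON                             # ~1.2e57 nucleons
v = 4.0 / 3.0 * 3.14159 * R_SUN**3                # ~1.4e27 m^3
per = v / n                                       # ~1.2e-30 m^3 per nucleon
print(f"{n:.2g} nucleons, {v:.2g} m^3, {per:.2g} m^3 each")
print(f"mean spacing {per ** (1/3):.3g} m, {R_SUN / per ** (1/3):.2g} encounters")
```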
This means that a neutrino would encounter a nucleon every $`1.06\times 10^{-10}`$ m, or up to $`6.6\times 10^{18}`$ times from the centre to the surface. This mean free path between nucleons is the de Broglie wavelength of a particle of about $`12`$ keV, so some diffraction appears to be inevitable. The observed attenuation, about two-thirds, must result from $`O(10^{18})`$ compoundings, assuming, as an approximation, that all neutrinos originate from near the centre; this yields $`\sigma \approx \mathrm{exp}[10^{-18}\mathrm{log}(2/3)]\approx \mathrm{exp}(-4\times 10^{-19})`$, or $`\sim 10^{-19}`$ dB per encounter, as the diffractive attenuation needed to explain the observed loss. Once again, it is the *smallness* of the necessary attenuation that makes it plausible as the cause, and we speculate that this could be the wave-theoretic reason for the breaking of symmetry that leads to weak interactions.

Our last application is to the appearance of primeval galaxies at high redshift factors $`z`$, and comes from observing that the selective absorption of certain frequencies by matter close to the line of sight should trigger a diffractive loss $`\sigma ^{}>\sigma `$ at the absorbed frequencies. This should result in a subtractive spectral modification of the light from more distant matter, and the spectral subtraction by our own galactic neighbourhood, which is well evolved and metal-rich, should make more distant galaxies appear metal-deficient and primeval. Only an extremely small $`\sigma ^{}`$ is needed, once again, to explain the almost total metal-deficiency of the primeval galaxies. As $`\sigma ^{}`$ must result from selective absorption by obstructions that must be transparent at other frequencies, i.e. from a different set of obstructions, $`\sigma ^{}`$ cannot be identical to $`\sigma `$. But it would be of the same order, leading to a similar cutoff $`r_n^{}\approx r_n`$ (eq. 3), because denser regions of dust that invariably surround opaque bodies are likely causes of selective absorption. Importantly, we may argue that the most distant galaxies can appear mature *only if there were a uniform mix of young and old galaxies all the way right up to our own galaxy*, but this would contradict the standard model. This does not mean unexpected support for the big bang theory, however, because we could merely be in a relatively small region of the Olbers universe, of radius at least $`r_n`$, that formed at or before the big bang indicated by the Hubble redshift.

I thank Shai Ronen, Gyan Bhanot, A Joseph Hoane and, most of all, Bruce Elmegreen, for valuable discussions in the context.

## Appendix A Point diffraction

The usual treatment for diffraction due to a small region concerns apertures in an opaque screen, but our stars are in effect obstructions in an otherwise empty space. If $`U_s`$ represents the diffracted amplitude due to a star, and $`U_a`$, that due to an aperture of the same size, we have by Babinet’s principle that $`U_s=U_0-U_a`$, where $`U_0`$ represents the unobstructed incident wavefront. In holography, for instance, we can use this theorem to compute the resulting interference between the direct rays ($`U_0`$) and the diffracted rays ($`U_s`$) from a point obstruction. In the present context, however, we are not interested in the local interference patterns, but only in the direction of the power flow, so the interference between direct and diffracted wavefronts is irrelevant. We do sum over rays going in the same direction $`\theta `$ in eq.
(11), but $`\overline{r}\gg \lambda `$, and in any case, there is enormous thermal jitter from our stellar obstructions, which would randomise the phase between the summed paths; hence interference effects can again be ignored, even if our initial rays happened to be quite coherent. We do need to consider phase within narrow bundles of rays, however, in order to obtain the wavelength dependence of the diffraction. Under the far-field assumption $`\overline{r}\gg D`$, $`D`$ being the mean diameter of our stellar obstructions, the incident wavefronts before diffraction would be not only almost planar, but likely to be temporally coherent across a sizeable fraction of the stellar surface. We shall limit ourselves to pure Fourier components, avoiding questions of incoherence, because issues of decoherence and wavepacket dispersal can be considered in terms of these components. With these assumptions, we may apply Kirchhoff’s condition that the conditions in a stellar diffracting region $`𝒜`$ will not be much affected by the surrounding matter, which yields the usual Fresnel-Kirchhoff diffraction formula (Born and Wolf, 1959, §8.3.2)

$$U(\theta )=-\frac{i}{2\lambda }\int _𝒜[\mathrm{cos}(\widehat{r})-\mathrm{cos}(\widehat{s})]\frac{e^{ik(r+s)}}{rs}dS,$$ (A1)

where $`s`$ and $`r`$ are the distances to the source and to the point of observation, respectively. We cannot, however, afford to replace the cosines with a single $`\mathrm{cos}(\theta )`$ factor as in the traditional theory, since this would be proper only for paraxial rays. To correctly compute $`U`$ at large diffraction angles $`\delta `$, we need to retain the two-cosine form in our Fraunhofer approximation, obtaining

$$U(\theta )=-\frac{i[\mathrm{cos}(\widehat{r})-\mathrm{cos}(\widehat{s})]}{2\lambda rs}\int _𝒜e^{ik(r+s)}dS\approx -\frac{i\omega [\mathrm{cos}(\widehat{r})-\mathrm{cos}(\widehat{s})]}{2\pi c}\frac{e^{ik(r+s)}}{rs}\int _𝒜e^{ik(\xi +\eta )}dS,$$ (A2)

$`\xi `$ and $`\eta `$ being the direction cosines as in prior theory. A remaining problem is, of course, that the above formulae are meant for diffraction from an aperture, and not the opaque disk presented by each star, but Babinet’s principle assures us that the resulting $`f`$ will be identical. Assuming our stars to be generally circular, we find that the integrand would have the form $`2J_1(ka\theta )/ka\theta `$, where $`k\equiv 2\pi /\lambda `$ and $`a`$ is the radius of the aperture, yielding

$$f(\theta )\equiv |U(\theta )\overline{r}|^2=\frac{\omega ^2[\mathrm{cos}(\widehat{r})-\mathrm{cos}(\widehat{s})]^2}{(2\pi c)^2}\left[\frac{2J_1(ka\theta )}{ka\theta }\right]^2,$$ (A3)

since the $`(rs)^{-2}`$ factor is separately accounted for by $`\overline{r}`$ in §3. As a result, $`f(\theta )`$ is non-zero *almost everywhere* over the entire range of $`\theta `$, which includes $`\pi `$. This “backscatter” is a reflection from space, due to the local perturbation in the impedance of space caused by the off-axis diffracting star, and is definitely nonzero for any nonzero solid angle around $`\theta =\pi `$. The angular spread is more strongly governed by the phase factor than by $`\omega `$, so that a given gap between nearer obstructions will be more open to gamma rays than to visible light, as remarked in §1.
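For completeness, the angular factor of eq. (A3) is easy to evaluate numerically; a sketch (the obliquity prefactor is dropped, and $`a`$ and $`\lambda `$ are illustrative values only, not stellar parameters):

```
import numpy as np
from scipy.special import j1

lam, a = 500e-9, 1e-3          # illustrative wavelength and obstruction radius
k = 2 * np.pi / lam

theta = np.array([1e-7, 1e-3, np.pi])
x = k * a * theta
f = (2 * j1(x) / x) ** 2       # eq. (A3) without the prefactor
print(f)                       # ~1 on axis; tiny but nonzero out to theta = pi
```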
# Level spacing statistics of disordered finite superlattices spectra and motional narrowing as a random matrix theory effect

## I Introduction

In a recent work we discuss a model disordered quantum well made of a short repulsive binary alloy embedded by ordered barriers. Since repulsive binary alloys show a correlation in the disorder, an effective delocalization window could be established in the energy range below the barrier top. Therefore, the intensively studied problem of delocalization in one-dimensional chains with correlated disorder could be addressed from the point of view of quantization effects due to spatial confinement in such chains. A further motivation concerns the verifiability of quantum confinement effects in amorphous semiconductor heterostructures, a largely unanswered question. In this context, repulsive binary alloy quantum wells lead to a description of these systems that inherently includes a key feature of many disordered bulk materials: the presence of well defined energy ranges showing either localized or effectively delocalized states.

The existence of well defined spatial confinement quantization in these model disordered quantum wells poses a new question concerning the coupling of quantum wells in finite superlattices. Although well defined, the quantum well states show an inhomogeneous broadening, and therefore it is not clear how the level repulsion due to quantum well coupling behaves as a function of the localization length. Once the coupling of two disordered quantum wells by means of tunneling through a barrier has been described, the evolution of level repulsion in finite superlattices could still show qualitatively different aspects. In the present paper we show that the coupling of two quantum wells can be well described by the usual tunneling mechanism in a picture delivered by appropriate level spacing statistics for an ensemble of double quantum wells. Furthermore, the level spacing statistics evolve to Poisson or Wigner surmise distributions in finite superlattices, depending on the miniband index. For the sake of clarity, in the present work we will refer to the level clustering around the average energy of the levels of isolated quantum wells as a miniband. Hence, level spacing statistics will be applied to levels within a given miniband. Therefore, for a given set of parameters, while one level cluster may show signatures of true minibands, others in the same superlattice may range from effectively extended behaviour down to strong localization. The minibands of effectively extended states have level spacing properties of a diffusive-like regime. We also identify in the present system a motional narrowing effect in diffusive-like minibands, which is actually a consequence of random matrix theory (RMT).

In what follows we first briefly describe the model Hamiltonian for the disordered quantum wells and finite superlattices, as well as the level spacing statistics approach. Afterwards, results for finite disordered superlattice spectra will be shown and discussed based on the nearest neighbour level spacing statistics properties. Finally, the numerical results for the motional narrowing effect will be presented and analysed as a consequence of RMT.
## II Disordered finite superlattice model Hamiltonian

The present model Hamiltonian consists of a one-dimensional chain of s-like orbitals, treated in the tight-binding approximation, with nearest-neighbor interactions only,

$$H=\sum _n(\epsilon _n|n><n|+V_{n,n+1}|n><n+1|+V_{n+1,n}|n+1><n|).$$ (1)

A finite chain segment will emulate a single well, or a finite superlattice, sandwiched by infinite barriers (isolated structure). The actual chain is constructed in such a way that short disordered segments representing the quantum wells are separated by sets of a few sites for the barriers. The ends of the total chain segment will emulate wider barriers, of the same height as the internal ones, in order to minimize surface effects at the quantum wells at the ends of the finite superlattices. The well material is the above mentioned repulsive binary alloy, where the bond between atoms of one of the species is inhibited, introducing short range order: in a chain of A and B sites, only A-A and A-B nearest-neighbor bonds are allowed. The introduction of this short range order leads to delocalization of states in the disordered chain. The well layer is then characterized by the correlation in disorder and the concentration of B-like sites, which is related to the probability $`P_B`$ of a B-like site being the next one in generating a particular chain configuration. All results shown here are for $`P_B=0.5`$, corresponding to an effective concentration of $`B`$-like sites of $`\rho _B\approx 0.3`$. Disorder is straightforwardly introduced by randomly assigning A and B sites, according to the constraints on concentration and bonding mentioned above. Since we are simulating disordered systems, averages over hundreds of configurations for the same parameters are undertaken.

The atomic site energies used throughout this work are $`ϵ_A=-0.3`$ eV, $`ϵ_B=0.3`$ eV, and the hopping parameters are $`V_{AA}=0.8`$ eV between A-like sites and $`V_{AB}=0.5`$ eV between A-like and B-like sites for the well layer. For the barriers these parameters are $`ϵ_{br}=0.8`$ eV and $`V_{br}=0.4`$ eV. The results shown here all consider wells of $`L_w=40`$ sites and barrier widths of $`2\le L_b\le 15`$ sites. The external barriers are characterized by the same barrier parameters given above and are $`L_{ext}=11`$ sites wide. At interfaces we consider geometric averages of the hopping parameters. The number of quantum wells in a finite superlattice is varied in the range $`2\le N_w\le 19`$ throughout the present work.

## III Level spacing statistics

A bona fide quantum well state in a single disordered quantum well still shows an inhomogeneous broadening due to the underlying disorder, related to a finite localization length, even if this length is many times longer than the well width, a condition necessary for the spatial quantization itself. On the other hand, considering two coupled quantum wells, it is important to compare the energy scale of this inhomogeneous broadening to the tunneling splitting of the levels. A level splitting must be determined as an average over several double-well-like chains with different disorder configurations. However, if the broadening is of the order of (or larger than) this splitting, then the coupling of the quantum wells cannot be resolved numerically in an average of the energy spectra. We will see that this is actually the most common situation we found by numerical inspection of the problem. Therefore we must consider a nearest neighbour level spacing statistics of the spectra.
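Before turning to the statistics, the model of Sec. II is compact enough to state as code; a minimal sketch (hedged: $`ϵ_A`$ is taken as $`-0.3`$ eV, reading the printed value as having lost its sign, and the interface hoppings are plain geometric averages of the neighbouring well and barrier parameters):

```
import numpy as np

rng = np.random.default_rng(0)
E = {'A': -0.3, 'B': 0.3, 'X': 0.8}            # site energies (eV); 'X' = barrier
V = {('A','A'): 0.8, ('A','B'): 0.5, ('X','X'): 0.4}

def alloy(n, pB=0.5):
    """Repulsive binary alloy: a B site is never followed by another B."""
    s = ['A']
    while len(s) < n:
        s.append('B' if (s[-1] == 'A' and rng.random() < pB) else 'A')
    return s

def superlattice(n_wells=2, Lw=40, Lb=2, Lext=11):
    sites = ['X'] * Lext
    for w in range(n_wells):
        sites += alloy(Lw)
        sites += ['X'] * (Lb if w < n_wells - 1 else Lext)
    return sites

def hopping(a, b):
    key = tuple(sorted((a, b)))
    if key in V:
        return V[key]
    # well/barrier interface: geometric average (B-B pairs never occur)
    well = V[('A','B')] if 'B' in key else V[('A','A')]
    return np.sqrt(well * V[('X','X')])

def hamiltonian(sites):
    H = np.diag([E[s] for s in sites])
    for i in range(len(sites) - 1):
        H[i, i+1] = H[i+1, i] = hopping(sites[i], sites[i+1])
    return H

levels = np.linalg.eigvalsh(hamiltonian(superlattice()))   # one disorder realization
```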
Having in mind the double-well system, we refer to the coupling between the quantum wells as a two-level problem, and the splitting $`S`$ is given by

$$S^2=(H_{11}-H_{22})^2+(2H_{12})^2$$ (2)

where $`H_{11}`$ and $`H_{22}`$ are the energies of equivalent quantized levels, one in each well, while $`H_{12}`$ is the coupling between both wells. If $`H_{11}-H_{22}`$ and $`2H_{12}`$ are independent Gaussian random variables, the distribution of the level spacing (tunneling splitting) is given by a Wigner surmise:

$$P(S)=\frac{\pi }{2D^2}S\mathrm{exp}\left\{-\frac{\pi }{4}\frac{S^2}{D^2}\right\}$$ (3)

where $`D`$ is the average spacing. If $`H_{12}=0`$, i.e., there is no coupling between the states of the two quantum wells, the distribution of the level spacing reduces to the Poisson distribution:

$$P(S)=\frac{1}{D}\mathrm{exp}\left\{-\frac{S}{D}\right\}$$ (4)

The average level spacing, $`D`$, is related to the variance, $`\sigma ^2`$, of the Gaussian random variables by $`D=\sigma \sqrt{\frac{\pi }{2}}`$. These are two interesting limits given by RMT, which establish a useful investigation tool for the coupling between two (or more) disordered quantum wells, relative to the localization length in these systems. It should be noticed that, while $`H_{11}-H_{22}`$ is a random variable, since there is an inhomogeneous broadening of the levels, $`2H_{12}`$ is not entirely random for a system of only a few quantum wells. In what follows we will show that numerical results confirm this prediction, but the coupling between wells will be independently randomized in multiple coupled quantum wells, and the Wigner surmise will also become an important limit of the general problem.

The level spacing statistics for levels within a given miniband is a well defined problem, since the clustering of eigenvalues in minibands delivers sets of equal numbers of levels of comparable localization lengths, and these sets are well separated in energy from each other. In other words, a finite superlattice overcomes the problem of arbitrarily defining energy intervals in which to evaluate the level spacing distributions, as in long one-dimensional chains with correlated disorder showing an almost continuous spectrum of states with localization lengths that are strongly energy dependent.

## IV Energy spectra and level spacing distributions

Previous results on single disordered quantum wells show that quantum confinement effects are already resolved if some average level spacings are greater than the level broadening. However, the formation of superlattice minibands with the successive coupling of disordered quantum wells is related to a different energy scale. In order to build up a miniband of Bloch states, the inhomogeneous level broadening has to be much less than the level repulsion due to the tunneling coupling. For a given tight-binding parameter set this quantum well level splitting can be tuned by changing the barrier width. After several numerical tests we chose the parameters listed in Section II. For the most favorable situation found, for barrier widths down to $`L_b=2`$ sites, the average level repulsion turned out to be of the order of the inhomogeneous broadening of the levels, with the exception of a peculiar quantum well level, as will be discussed below. In Fig. 1 we show the energy spectra as a function of disorder configuration, for a single quantum well, a double quantum well and a finite superlattice of five quantum wells, for $`L_b=2`$.
The spectra are shown in the range around the energy of maximum localization length of an infinite repulsive binary alloy with the given parameters, $`\lambda _{max}\approx -0.7`$ eV. We can see an internal structure for the level $`n=12`$, which partially survives in the double quantum well and in the finite superlattice, with clear signatures of level splitting due to quantum well coupling. For a finite superlattice this intra-miniband structure is better observed in the level-spacing distribution, as will be seen below. This quantum well level is closely tuned to the $`\lambda _{max}`$ energy for a $`L_w=40`$ sites well. Hence, the level broadening is the lowest one, leading to the resolution of the internal structure of the average level spectrum. This structure is related to the fact that the well is a repulsive binary alloy and the effective well width varies according to the interface sites. Three different configurations at the interface are possible: (a) A-like sites at both interfaces; (b) one A-like site at one interface and a B-like site at the other; (c) B-like sites at both interfaces. The localization increases from (a) to (c), proportional to the detuning from the $`\lambda _{max}`$ condition and the consequent increase in the inhomogeneous broadening. The level repulsion for these $`n=12`$ states is partially resolved on average for a double well. Nevertheless, increasing further the number of wells, the average energy separation of nearest neighbour levels within a miniband decreases and the coupling among quantum wells is not resolved anymore for a finite superlattice, as can already be seen for the finite superlattice of five quantum wells.

The miniband immediately below, related to the level $`n=11`$, has an inhomogeneous broadening already wider than the internal average structure due to the interface configuration. On the other hand, these states are still bona fide quantum well states and should repel each other by quantum well coupling, but such a level splitting is not resolved in the average spectrum even in the case of a double well with very thin barriers, Fig. 1. The level splitting for these states may actually be quantified in histograms of individual levels of the $`n=11`$ doublet, Fig. 2. In Fig. 2(a) the histograms generated by the lower and the higher state, taken separately in 1000 disorder configurations, are depicted. Each level shows a nearly Gaussian inhomogeneous broadening with a half width of the order of the average level splitting given by the energy separation of the curve peaks, $`\delta `$. The average density of states in this energy range, Fig. 2(b), on the other hand, is structureless, as mentioned above.

Since disordered quantum well states show a coupling, but with a level splitting of the order of or less than the individual level broadening, the density of states is not a useful quantity to analyse and one should look at the nearest neighbour level spacing statistics. Nearest neighbour level spacing probabilities for double disordered quantum wells are shown in Fig. 3. We consider the $`n=12`$, Fig. 3(a), and $`n=11`$, Fig. 3(b), doublets. The numerical histograms, obtained taking into account 1000 disorder configurations, are compared to the analytical Poisson and Wigner distributions for the numerically obtained average level spacing, $`D`$. Here we also consider different barrier thicknesses, $`L_b=2`$ and $`L_b=5`$ sites wide, as a mechanism that modifies the level repulsion. First, for thin barriers (strong coupling), top of Fig.
3(a) and 3(b), we see that the level spacing probabilities present a threshold, $`\mathrm{\Delta }`$, for both doublets. This threshold, i.e., the minimum value of the level spacing, is a direct measure of the tunneling coupling between resonant states. This can be confirmed by inspecting that indeed $`\mathrm{ln}\mathrm{\Delta }\propto -L_b`$. The spacing probability for the $`n=12`$ doublet shows a rich structure above the threshold, which is due to the resolved interface-related structure, already seen in the energy spectra as a function of disorder configuration, Fig. 2. For the $`n=11`$ states the spacing probability distribution is a smoother, structureless curve, which is evidence for a level broadening wider than the level repulsion.

This picture changes completely for thicker barriers. By increasing the barrier thickness, the minimum level splitting diminishes exponentially, becoming negligible compared to the average level spacing. In this limit, already seen for $`L_b=5`$, the nearest neighbour level spacing approaches a Poisson distribution, as can be seen in the bottom panel of Fig. 3(b), which is expected for uncorrelated levels, localized in either one of the wells. In the bottom part of Fig. 3(a), for the $`n=12`$ (more delocalized) states, there are still structures in the level spacing distribution and no definitive answer concerning the correlation of levels can be delivered from comparisons with either the Wigner surmise or the Poisson distribution.

Interesting to notice is the fact that no Wigner-surmise-like distribution is obtained in the thin barrier limit. Although the almost degenerate levels show a Gaussian random distribution, the coupling between them is not random. The finite threshold shown in the top of Fig. 3(b) actually suggests that the coupling is nearly constant in a first order approximation. Inter-well coupling, however, may be randomized if we increase the system by adding successive quantum wells, generating a finite superlattice. With such a procedure, the level repulsion diminishes linearly and not exponentially, and a Wigner-like distribution could be expected. This trend can already be observed for a finite superlattice of five quantum wells, keeping the barrier width $`L_b=2`$, Fig. 4. In this figure the level spacings for other minibands of more localized states are also depicted: (a) is for the $`n=12`$ miniband, while (b), (c), and (d) are for the $`n=11`$, $`n=10`$, and $`n=9`$ minibands, respectively. Now we see that the miniband related to $`n=12`$, although showing some structure, presents a clear Wigner surmise envelope. For low spacings, however, a threshold is still seen. This threshold is already absent for the other minibands in this five quantum well finite superlattice. For the miniband related to the $`n=11`$ states, the numerical simulation of level spacings shows a perfect agreement with the Wigner surmise distribution. This agreement is still reasonable for the $`n=10`$ miniband, Fig. 4(c), while for lower miniband indexes, as in Fig. 4(d), a crossover from the Wigner surmise to the Poisson distribution is clearly identified. The fact that in a finite superlattice of disordered quantum wells nearest neighbour levels associated with the same miniband obey a Wigner surmise indicates that these levels are still correlated and extended along the entire system, Fig. 4(b).
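The two limiting distributions used in Figs. 3–5 are easy to emulate with synthetic spacings (a sketch of the diagnostic only, not of the superlattice calculation itself):

```
import numpy as np

rng = np.random.default_rng(1)
D, n = 1.0, 100_000
sigma = D / np.sqrt(np.pi / 2)        # D = sigma*sqrt(pi/2), as in Sec. III

# eq. (2) with Gaussian H11-H22 and 2*H12: spacings follow the Wigner surmise
s_wigner = np.hypot(rng.normal(0, sigma, n), rng.normal(0, sigma, n))
# uncoupled wells: spacings of uncorrelated levels are exponential (Poisson)
s_poisson = rng.exponential(D, n)

print(s_wigner.mean(), s_poisson.mean())                    # both ~D
print((s_wigner < 0.05).mean(), (s_poisson < 0.05).mean())
# level repulsion: small spacings are strongly suppressed in the Wigner case
```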
Besides, we may determine a lower bound for the localization length of these states: as long as the level spacing in a given miniband follows the Wigner surmise, the localization length exceeds the system length, $`\lambda >L`$. Based on this evidence, further increasing the system length, by adding more quantum wells, would lead to a transition to localization, i.e., $`\lambda <L`$. The transition to localization with increasing number of quantum wells can be seen in the level spacing distribution of a fifteen-well-long superlattice, Fig. 5: (a) for the $`n=12`$ miniband and (b) for the $`n=11`$ miniband. In this figure we also pay attention to the difference in the level spacing distribution for levels at the center of a miniband, top of Fig. 5, and at the edges, bottom of Fig. 5. For $`n=12`$ states at the center of the miniband, top of Fig. 5(a), the level spacing distribution still follows a Wigner surmise. A clear transition to a Poisson distribution is seen for the level spacing at the bottom edge of the miniband (bottom of Fig. 5(a)). The transition towards localization is already seen for levels at the center of the $`n=11`$ related miniband.

Due to the inhomogeneous broadening of the quantized levels there is no miniband formation in the usual sense of ordered systems, which could be characterized by a nearly clean or at least a ballistic regime. Nevertheless, minibands in a less strict sense are defined by a set of states which repel each other and are delocalized through the whole system. These results suggest that finite superlattices of quantum wells with correlated disorder present very interesting properties related to the competition between the level repulsion and the level broadening. Such a competition arises from the fact that the inhomogeneous broadening is mainly related to the properties of the repulsive binary alloy of which the quantum wells are made, while the level repulsion is introduced by the coupling of the quantum wells, which can be partially tuned through more or less transparent barriers. From this point of view, the transition from a diffusive-like regime to a localized one may be obtained by means of three different mechanisms.

The first mechanism is the suppression of level repulsion by increasing barrier widths, illustrated for the double-well case in Fig. 3. By increasing the barrier width, the coupling between the wells is reduced and the states tend to localize in only one of the wells. Here we see that the tunneling range depends on the inhomogeneous broadening of the almost resonant states. For barriers wider than the tunneling range, the states become uncorrelated. In other words, varying the barrier width introduces a spatial dependence of the density correlation function, in analogy to the energy-level correlation function analysis for the Anderson disordered tight-binding model by Sivan and Imry.

A second mechanism of coupling suppression is simply related to the localization properties of the bulk chain that constitutes the material of the disordered quantum wells. Hence, for a given superlattice, there are minibands with different degrees of localization, from clusters of states with true miniband characteristics to completely localized ones, including at least one miniband with a diffusive character. This effect can be followed for the different minibands of a five-well superlattice in Fig. 4. Probably the most interesting repulsion suppression mechanism is related to increasing superlattice length.
Related to this mechanism is the expected behaviour that a diffusive-like miniband in a finite superlattice becomes localized if the number of quantum wells makes the superlattice exceed the localization length for that given miniband. The level spacing statistics evolve from a Wigner surmise towards a Poisson distribution because some levels start to become uncorrelated. This behaviour can be identified by comparing Fig. 4 with Fig. 5. On average, the level repulsion within a miniband decreases with the increase of the number of quantum wells in a superlattice. This follows the textbook picture, within a tight-binding framework, of the formation of continuous bands for infinite crystals by the successive addition of atoms to the system.

## V Motional narrowing

Although the suppression of level repulsion with increasing number of quantum wells can be identified in the level spacing statistics, a more precise signature of this mechanism is related to a motional narrowing effect. The term “motional narrowing” has been used to designate the reduction of spectral line widths in disordered systems by some averaging process. Such an effect occurs in many areas of physics and, in particular, has recently been predicted and observed in semiconductor microcavities. In the present work, the motional narrowing shows up as a diminution of the inhomogeneous broadening of the states with increasing system length. In Fig. 6 the average histograms of individual levels of the $`n=11`$ miniband for superlattices of different lengths are shown: $`N_w=5`$, Fig. 6(a), and $`N_w=9`$, Fig. 6(b). The shrinking of the average half width of the states with increasing number of quantum wells is noticeable. For a better comparison, we pay special attention to the central miniband state in either case. Actually, the narrowing of the broadening is most intense for this state.

This narrowing may be qualitatively understood from the fact that the level repulsion inhibits the inhomogeneous broadening of an individual level taken as an average over many disorder configurations, since the levels in a finite superlattice are correlated. In other words, coupling of states shrinks the range for broadening, and the central state in a miniband feels the strongest reduction of this broadening range. In a more precise way, this motional narrowing effect is predicted by RMT, having in mind the expression for the Wigner surmise. The average level repulsion, $`D`$, in eq. (3) is directly related to the half width of the random Gaussian distribution of the $`H_{ij}`$ in eq. (2), $`D=\sigma \sqrt{\frac{\pi }{2}}`$. A finite disordered superlattice is characterized, in analogy to its ordered counterpart, by the reduction of the average level repulsion with increasing system size. This reduction leads to a diminution of the broadening of the random distribution of the Hamiltonian matrix elements. Hence, the motional narrowing is an effect directly related to and explained by RMT.

## VI Conclusions

The present model for a finite superlattice overcomes the usual difficulties of RMT applied to one-dimensional systems. Furthermore, the transition from the Wigner surmise to Poisson, by increasing the system length, suggests that some minibands, although defined in a one-dimensional system, show properties of diffusive regimes. Important here is the presence of correlations, as well as the limit of states completely isolated in a single quantum well and the natural localization length selection by the miniband index.
However, there is also a fine tuning of the localization length in each miniband, as can be seen already from the differences in the level spacing distribution for different level pairs within a miniband, as depicted in Fig. 5. Further, a less pronounced motional narrowing for states at the edges of a miniband, compared to central ones, is also an indication of this localization length tuning. A definitive result for the modulation of the localization length within a miniband is given by the participation ratio $$𝒫=\underset{\stackrel{}{r}}{}|\mathrm{\Psi }(\stackrel{}{r})|^4,$$ (5) where $`\mathrm{\Psi }`$ is normalized, $`_{\stackrel{}{r}}|\mathrm{\Psi }(\stackrel{}{r})|^2=1`$. In the tight-binding model with $`N`$ atomic sites eq. (5) becomes: $$𝒫=\underset{i=1}{\overset{N}{}}|a_i^\alpha |^4=𝒜_\alpha $$ (6) where $`a_i^\alpha `$ is the $`i`$th site component of the $`\alpha `$th eigenstate. The participation ratio, $`𝒫`$, is inversely proportional to the localization length . Besides, the participation ratio delivers an overview of the main results shown in the present paper. In Fig. 7, the average participation ratio for $`N_w=19`$ superlattices is shown. Open circles are for thin barriers, $`L_b=2`$, and full squares for thick barriers, $`L_b=5`$. For both cases, several minibands separated by minigaps are identified. However, only in the thin barrier case do the minibands show a clear behaviour of the centre being less localized than the edges. This recalls the mechanism of suppressing level repulsion by increasing barrier width. We also observe the reduction of the localization length together with increasing broadening for minibands progressively farther in energy from the $`n=12`$ miniband, recalling the behaviour of the localization length of a repulsive binary alloy bulk chain. It should be noticed that for energies below -0.85 eV and above -0.5 eV in Fig. 7, no minibands are resolved in the energy spectra. Nevertheless, a clear modulation of the localization length is still present. This suggests that minibands in disordered superlattices could still be defined by a modulation of the localization length in a continuous spectrum. ## VII Acknowledgements R.R. Rey-González gratefully acknowledges the hospitality of the Instituto de Física Gleb Wataghin at the State University of Campinas, where part of this work was done. This stay was possible with the financial support from the Facultad de Ciencias of the Universidad Nacional de Colombia, together with the International Program in the Physical Sciences (Uppsala University) and the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP). P.A.S. also acknowledges the financial support from FAPESP and the Conselho Nacional de Pesquisa e Desenvolvimento (CNPq). P.A.S. would like to thank E. R. Mucciolo for useful discussions.
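The participation-ratio diagnostic of Eqs. (5)-(6) is straightforward to evaluate once the eigenstates are known. The sketch below assumes a generic tight-binding Hamiltonian matrix; the binary-alloy chain used as an example is only a stand-in with arbitrary parameters, not the superlattice geometry of the paper.

```python
import numpy as np

def participation_ratios(H):
    """P_alpha = sum_i |a_i^alpha|^4 (Eq. (6)) for every eigenstate of H.
    Large P means a strongly localized state; P ~ 1/N for a state spread
    uniformly over N sites."""
    energies, vecs = np.linalg.eigh(H)   # columns are normalized eigenvectors
    return energies, np.sum(np.abs(vecs)**4, axis=0)

# Illustration on a binary-alloy chain (arbitrary parameters):
rng = np.random.default_rng(1)
N = 400
H = (np.diag(rng.choice([0.0, 1.0], size=N))
     + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
energies, P = participation_ratios(H)
# Plotting P against energy reproduces the kind of modulation shown in Fig. 7:
# minima of P (most extended states) at miniband centres, maxima at the edges.
```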
no-problem/9909/cond-mat9909174.html
ar5iv
text
# Equilibrial Charge of Grains and Low-temperature Conductivity of Granular Metals. ## Abstract The low-temperature equilibrium state of a system of small metal grains, embedded into an insulator, is studied. We find that the grains may be charged due to the fluctuations of the surface energy of the electron gas in the grains, rather than quantization of electron states. The highest-occupied level in a grain fluctuates within a range of the order of the charging energy below the overall chemical potential. The system, called a gapless Hubbard insulator, has no overall energy gap, while the transfer of an electron over finite distances costs finite energy. The ionization energy is determined mostly by the intragrain Coulomb repulsion, rather than the weak intergrain interaction responsible for the Coulomb gap. The hopping transport in the system is studied. The hopping energy is determined by the charging energy. At low temperature the transport has a gapless character. The hopping conductivity of metal-insulator composites has been intensively studied in the past decades. Numerous experiments evidenced the typical behavior of the conductivity $`\sigma \propto \mathrm{exp}[-(T_0/T)^{1/2}]`$, which requires a zero energy gap of the system . It is generally accepted that the energy scale of hopping transport originates from the Coulomb interaction. Two main reasons were put forward, intra- and intergrain e-e repulsion. The first one leads to a Hubbard-like hard gap and a constant activation energy of the hopping conductivity; the second is responsible for the smooth Coulomb gap. The Coulomb gap requires grain charging at low temperature. At the same time the origin of the charging is not clear. In , the electron redistribution was ascribed to the difference of the grains’ work functions. This assumption is reasonable for grains of different metals, but this is not the case in most experiments. Chui assumed that the fluctuations of level spacing may outweigh the charging energy, leading to grain ionization. Another reason for charging was proposed in . The numerical study of showed that a large portion of grains should be charged. The charging was attributed to the large variation of the highest-occupied energy level due to the wide fluctuation of level spacing both in integrable and non-integrable systems. The system of doped semiconductor quantum dots was studied in our work . In this system the dots become charged due to fluctuations of the local chemical potential, caused by the fluctuations of impurity numbers. This system was evidenced to be a gapless Hubbard insulator. The purpose of this Letter is the study of low-temperature equilibrium states and hopping transport in a system of large metal grains, embedded in an insulator. Unlike other studies, we account for the electron quantization to first order, responsible for the grain surface energy. We argue that grain charging at low temperature results from variations of the surface energy of the electron gas in different grains. We show that, depending on the ratio of the Coulomb and Fermi energies, the mean charge may exceed the electron charge, producing a zero-gap Hubbard insulator. Then we study the hopping transport in this system. Let us discuss the typical parameters of the problem. The charging of a grain with capacitance $`C`$ by an electron costs the energy $`U_c=e^2/2C`$. This energy is less than the energy of e-e interaction at the mean distance between electrons inside a grain, $`U_0`$, which in most cases is less than the Fermi energy of the bulk metal $`E_F`$. 
Hence inside a grain the electron gas is ideal, while the charging energy $`U_C`$ determines the transport properties. The other important parameter is the interlevel distance $`\delta E\sim (\nu V)^{-1}`$, where $`\nu \propto E_F^{1/2}`$ is the density of states of the bulk metal and $`V`$ is the grain volume. Since $`C\propto V^{1/3}`$, $`U_c\gg \delta E`$ in a grain larger than the atomic size. At the same time, the surface energy per electron, found below, is proportional to $`E_FS/V\propto V^{-1/3}`$ and remains comparable with $`U_C`$ if $`V\to \infty `$. So we shall account for the surface energy and neglect the interlevel distance. We shall also neglect the collectivization of electron states in different grains. Let us consider a metal-insulator composite containing metal grains with small density $`N`$, $`NV\ll 1`$, embedded into an insulator with permittivity $`\kappa `$. Within our model, the grains can exchange electrons. The existence of surfaces leads to the surface energy $`\alpha S`$, proportional to the surface $`S`$, additional with respect to the infinite-volume contribution. To find the surface tension $`\alpha `$, consider the $`\mathrm{\Omega }`$-potential of the free electron gas in a neutral metal grain. Since $`\alpha `$ does not depend on the shape of the grain, one can consider a rectangular box. In the limit of large grain size and zero temperature $`\mathrm{\Omega }=-\frac{m^{3/2}(2\mu )^{5/2}}{15\pi ^2\hbar ^3}V+\frac{m\mu ^2}{8\pi \hbar ^2}S,`$ (1) where $`m`$ is the electron mass and $`\mu `$ is the local chemical potential. According to (1), the coefficient of surface tension $`\alpha `$ of the ideal electron gas is $`\alpha =\frac{m\mu ^2}{8\pi \hbar ^2}`$. The ratio of the surface energy to the volume energy is determined by the ratio of the Fermi wavelength to the system size. The single-electron spectrum strongly depends on the integrability of the system, determined by the grain shape. The $`\mathrm{\Omega }`$-potential (1) does not depend on the shape of a grain with fixed volume $`V`$ and surface $`S`$; hence $`\mathrm{\Omega }`$ is not sensitive to the energy spectrum. For neutrality of a grain the number of electrons $`Z_i=-\frac{\partial \mathrm{\Omega }_i}{\partial \mu _i}=\frac{(2m\mu _i)^{3/2}}{3\pi ^2\hbar ^3}V_i-\frac{m\mu _i}{4\pi \hbar ^2}S_i`$ should be equal to the number of ions $`z_i=nV_i`$, where $`n`$ is the density of ions and the subscript $`i`$ numbers the grains. For this to be so, the chemical potential should depend on the grain index through the ratio $`S_i/V_i`$. The equilibration of the electrochemical potentials $`\mu _i-e\varphi _i`$ produces the charges of the grains $`e\delta Z_i=e(Z_i-z_i)`$, the potentials of the grains $`\varphi _i`$ and the shift of the overall chemical potential $`\delta \mu `$ relative to $`E_F`$. The condition of overall electroneutrality $`\sum _i\delta Z_i=0`$ yields $`\varphi _i=\frac{e\delta Z_i}{\kappa C_i},\delta \mu =\frac{2\alpha }{3n}\frac{\langle C_iS_i/V_i\rangle }{\langle C_i\rangle },`$ (2) $`\delta Z_i=\frac{2\alpha \kappa }{3ne^2}\frac{C_i}{\langle C_i\rangle }\left(\langle C_i\rangle S_i/V_i-\langle C_iS_i/V_i\rangle \right),`$ (3) where the angular brackets denote the mean value. In the simple case of spherical grains $`C_i=R_i`$. According to (2,3), the typical charge of a grain is determined by the ratio of the Fermi energy to the energy of electron interaction in the metal, multiplied by $`\kappa `$, and if $`R_i-\langle R\rangle \sim \langle R\rangle `$, the charge $`\delta Z_i`$ may be either less or more than unity. 
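For spherical grains the combination $`C_iS_i/V_i=3`$ is constant, so the reconstructed Eq. (3) reduces to $`\delta Z_i\propto (1-R_i/\langle R\rangle )`$. The toy sketch below evaluates the formula for a random ensemble of radii; the log-normal radius distribution and the value of the dimensionless combination $`2\alpha \kappa /3ne^2`$ are illustrative assumptions, not parameters fixed by the text.

```python
import numpy as np

def grain_charges(R, coupling=1.0):
    """delta Z_i from Eq. (3) for spherical grains: C_i = R_i (Gaussian units),
    S_i/V_i = 3/R_i. `coupling` stands for 2*alpha*kappa/(3*n*e^2)."""
    C = R
    sv = 3.0 / R
    return coupling * (C / C.mean()) * (C.mean() * sv - (C * sv).mean())

rng = np.random.default_rng(0)
R = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)  # dispersed grain radii
dZ = grain_charges(R, coupling=1.0)
# Charges of both signs appear, their ensemble mean vanishes (overall
# neutrality), and the typical magnitude grows linearly with the coupling.
print(f"<dZ> = {dZ.mean():.2e}, typical |dZ| = {dZ.std():.2f}")
```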
If $`\alpha \kappa \ll ne^2`$, so that the typical $`\delta Z_i\ll 1`$, the neglect of the charge discreteness is not valid and the grains remain neutral. The transition of an electron from grain $`i`$ to grain $`j`$ then needs the energy $`(e^2/2\kappa )(1/R_i+1/R_j)`$. As a result, the system of grains proves to be a Hubbard insulator, in which the Fermi excitations are separated by a finite gap from the ground state. Conversely, if $`\alpha \kappa \gg ne^2`$, the discreteness of charge is unessential in the first approximation. The typical potential of a grain has the order of magnitude $`E_F\lambda _F/R`$ and may essentially exceed the temperature. Note that the level spacing $`\delta E`$ is much less than the typical potential, which justifies the quasiclassical approach used. Actually, Eq. (3) gives a fractional charge. This means that the equilibration of the local electrochemical potentials takes place only with an accuracy of the energy of charging of a grain by a single electron, $`U_C`$. The electron redistribution proceeds as long as it is energetically advantageous. The highest occupied energy level $`\mu _i`$ is separated from the overall electrochemical potential $`E_F+\delta \mu `$ by an energy less than $`U_C`$, while the lowest empty level is above $`\mu _i`$ by $`2U_C`$. Since $`|\delta Z_i|\gg 1`$, the differences $`E_F+\delta \mu -\mu _i`$ are uniformly distributed within an energy interval $`(0,U_C)`$. Such a state of the system has zero energy gap as a whole, while the transfer of an electron to a finite distance requires finite energy. Thus, the metal-insulator composite may be considered as a gapless Hubbard insulator for $`\alpha \kappa \gg ne^2`$. Let us consider the conductivity of a composite at low temperature. In the case $`\alpha \kappa \ll ne^2`$ the conductivity is determined by electrons excited into the upper Hubbard band (and holes in the lower band). Hence the conductivity is purely activated, with the activation energy $`\mathrm{min}(|\pm U_c-\mu |)`$, where $`\mu `$ lies in the Hubbard gap. In the case of the gapless Hubbard insulator the general features of transport are similar to the hopping transport in an impurity band. From the formal point of view, the problem is described by the Miller-Abrahams model, in which the atomic levels should be replaced by the highest occupied levels of the corresponding grains $`\mu _i`$, and the transition probability is the elastic tunneling probability. The system of equations for the external potentials of the grains $`\phi _i`$ and the currents between grains $`j_{ij}`$ has the usual form $`j_{ij}=eW_{ij}(\phi _i-\phi _j)`$, where $`\mathrm{ln}W_{ij}=-\zeta _{ij}`$, with $`\zeta _{ij}=2r_{ij}/a+(1/2T)(|\mu _i-\mu _j|+|\mu _i-\mu |+|\mu _j-\mu |)`$, $`r_{ij}`$ is the intergrain distance and $`a`$ is the localization length (the scale of the electron wave function decay outside a grain). An electron visits the sites where the magnitude $`\zeta _{ij}`$ satisfies the connectivity criterion $`\zeta _{ij}<\zeta _c`$. Due to the continuous grain spectrum an electron tunnels from an excited state of one grain to another grain. This process does not need phonon assistance, unlike inter-impurity transitions. The process is determined by the density of grains with accessible lowest empty energy levels per unit energy interval, $`N/U_C`$, instead of the integral density of states, as in hopping transport in the impurity band. At high enough temperature the tunnel factor will suppress the tunneling to distant grains and hops to the nearest neighbors will prevail, as the sketch below illustrates. 
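The connectivity criterion $`\zeta _{ij}<\zeta _c`$ can be explored numerically. The sketch below places grains at random in a two-dimensional unit square, draws levels $`\mu _i`$ uniformly around the chemical potential (set to zero), and bisects for the smallest $`\zeta _c`$ at which a left-to-right percolating path of bonds exists; the geometry, grain number and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from collections import deque

def percolates(zeta, zc, left, right):
    """True if some grain in `left` reaches some grain in `right` through
    bonds with zeta_ij < zc (breadth-first search)."""
    n = zeta.shape[0]
    seen = np.zeros(n, bool)
    queue = deque(left)
    seen[list(left)] = True
    while queue:
        i = queue.popleft()
        if i in right:
            return True
        for j in np.nonzero(zeta[i] < zc)[0]:
            if not seen[j]:
                seen[j] = True
                queue.append(j)
    return False

def critical_zeta(T, n=300, a=0.1, Uc=1.0, seed=0):
    """Bisect for the percolation threshold zeta_c of the Miller-Abrahams
    network, zeta_ij = 2 r_ij/a + (|mu_i - mu_j| + |mu_i| + |mu_j|)/(2T)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))                   # grains in a unit square
    mu = rng.uniform(-Uc / 2, Uc / 2, size=n)  # levels around mu = 0
    r = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    e = np.abs(mu[:, None] - mu[None, :]) + np.abs(mu)[:, None] + np.abs(mu)[None, :]
    zeta = 2 * r / a + e / (2 * T)
    np.fill_diagonal(zeta, np.inf)
    left = list(np.nonzero(pos[:, 0] < 0.05)[0])
    right = set(np.nonzero(pos[:, 0] > 0.95)[0])
    lo, hi = 0.0, 4 / a + 1.5 * Uc / T         # bracket assumed to percolate at hi
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if percolates(zeta, mid, left, right):
            hi = mid
        else:
            lo = mid
    return hi
```

At high $`T`$ the energy term is negligible and the threshold is set by the geometric factor $`2r_{ij}/a`$ alone (nearest-neighbor hopping); as $`T`$ decreases, `critical_zeta(T)` grows, reflecting the crossover to variable range hopping discussed next.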
The characteristic energy is then determined by some fraction of $`U_c`$, $`\xi _cU_c`$, where $`\xi _c`$ is the percolation threshold. If we assume that the grains are situated on a square or cubic lattice, $`\xi _c^{(2)}=0.59`$ and $`\xi _c^{(3)}=0.31`$, correspondingly. At low temperatures electrons prefer to jump to distant grains, optimizing the activation factor, which results in the variable range hopping mechanism of transport. The hops go to grains for which the logarithms of the tunneling probability and of the activation factor have the same order. In analogy with $$\sigma \propto \mathrm{exp}\left[-\left(\frac{T_1}{T}\right)^{1/4}\right],T_1\sim U_cN^{-1}a^{-3}.$$ (4) Formula (4) is valid within the range $`(E_F/Z)^{4/3}T_1^{-1/3}\ll T\ll U_c(Na^3)^{1/3}`$. Below the lower limit, the hopping activation energy becomes less than the distance between single-electron levels $`E_F/Z`$ and our assumption about the continuous spectrum of states fails. In this limit electrons prefer to jump to the nearest excited levels of the nearest acceptable grains. The typical hopping energy $`\mathrm{\Delta }`$ is determined by the condition that in the volume with radius of the hopping length $`r`$ there is at least one state belonging to the infinite cluster. The number of accessible states per unit volume in the energy range $`\mathrm{\Delta }`$ is specified by the number of states in one grain, $`Z\mathrm{\Delta }/E_F\ll 1`$, multiplied by the density of grains accessible to the electron, $`N\mathrm{\Delta }/U_c`$. As a result, we have $$\sigma \propto \mathrm{exp}\left[-\left(\frac{T_2}{T}\right)^{2/5}\right],T_2\sim \left(\frac{E_FU_c}{ZNa^3}\right)^{1/2}.$$ (5) Equation (5) obeys the ”2/5” law, close to the ”1/2” law observed in a number of experiments (see, for example, the reviews ). Let us emphasize that, unlike the theory of the Coulomb gap, caused by the long-range interaction of charges on different grains, our approach takes into account the stronger intra-grain charge repulsion. The long-range interaction has the same order of magnitude as the one considered here only near the percolation threshold $`NR^3\sim 1`$. For the Coulomb gap to manifest itself, the temperature should be so low, $`T\lesssim U_c(NR^3)^{4/9}(Na^3)^{1/3}`$, that the hopping energy becomes comparable with the interaction of electrons on neighboring grains. Near the percolation threshold this condition transforms to $`T\lesssim U_c(Na^3)^{1/3}`$. In this case the Coulomb gap is essential for transport and leads to the ”1/2” law . In our consideration we neglected the collectivization of states of different grains, assuming the charging energy $`U_c`$ to be greater than the tunnel amplitude. We limited ourselves to the effects of the grain shape on the electron redistribution. Other reasons for redistribution are the fluctuations of composition (which were studied earlier in the case of semiconductor quantum dots ), surface states, etc. In conclusion, we have evidenced that small metal grains may be charged even in equilibrium at zero temperature. The system may be a Hubbard insulator with or without an energy gap, depending on whether the surface tension of the electron gas is lower or higher than the Coulomb interaction. The conductivity of the gapless insulator at low temperature is determined by the variable range hopping mechanism, with the activation energy caused by the charging energy of the grains. The authors are grateful to B. Shklovskii for discussions. This work was supported by the Russian Foundation for Basic Research (Grant 97-02-18397).
no-problem/9909/nucl-th9909056.html
ar5iv
text
# Spin Alignments in Heavy-ion Resonances (Talk given at CLUSTER ’99, June 14-19, Rab, Croatia.) ## 1 Introduction Resonances observed in heavy-ion scattering have offered intriguing subjects in nuclear physics. $`{}_{}{}^{24}\mathrm{Mg}+{}_{}{}^{24}\mathrm{Mg}`$ and $`{}_{}{}^{28}\mathrm{Si}+{}_{}{}^{28}\mathrm{Si}`$ resonances exhibit very narrow decay widths, with prominent peaks correlated among the elastic and inelastic channels, which suggests rather long-lived compound nuclear systems. From the viewpoint of di-nuclear molecules, the present authors have studied normal modes around the equilibrium configurations, which are expected to be responsible for the observed resonances. Very recently, a $`{}_{}{}^{28}\mathrm{Si}+{}_{}{}^{28}\mathrm{Si}`$ scattering experiment has been performed on the resonance at $`E_{\mathrm{CM}}=55.8`$MeV at IReS Strasbourg. Figure 1 shows angular distributions for the elastic and inelastic channels $`2^+`$ and $`(2^+,2^+)`$, respectively. The oscillating patterns are found to be in good agreement with $`L=38`$, which suggests $`L=J=38`$ dominance in the resonance, namely, misalignment of the fragment spins. Angular distributions of $`\gamma `$-rays from the $`{}_{}{}^{28}\mathrm{Si}`$ first excited state to the ground state have also been measured in coincidence with two $`{}_{}{}^{28}\mathrm{Si}`$ nuclei detected at $`\theta _{\mathrm{CM}}=90^{}`$. Figure 2 displays the $`\gamma `$-ray intensities in three panels, where the quantization $`z`$-axis is taken parallel to the beam direction in (a), parallel to the normal to the scattering plane in (b), and parallel to the fragment direction in (c). The angular distribution in (b) shows a characteristic ”m=0” pattern, which suggests that the fragment spins $`𝐈_\mathrm{𝟏}`$ and $`𝐈_\mathrm{𝟐}`$ lie in the scattering plane, and is consistent with the misalignment observed in the $`{}_{}{}^{28}\mathrm{Si}`$ angular distributions. These features are much different from the $`{}_{}{}^{12}\mathrm{C}+{}_{}{}^{12}\mathrm{C}`$ and $`{}_{}{}^{24}\mathrm{Mg}+{}_{}{}^{24}\mathrm{Mg}`$ systems, which exhibit spin alignments. The aim of the present paper is to clarify the mechanism of the appearance of spin misalignment in the resonance of the $`{}_{}{}^{28}\mathrm{Si}+{}_{}{}^{28}\mathrm{Si}`$ system. ## 2 Di-nuclear Structure of $`{}_{}{}^{\mathrm{𝟐𝟖}}\mathrm{𝐒𝐢}+{}_{}{}^{\mathrm{𝟐𝟖}}\mathrm{𝐒𝐢}`$ System First, the structures of the resonance states of the $`{}_{}{}^{28}\mathrm{Si}+{}_{}{}^{28}\mathrm{Si}`$ system and their normal modes around a stable configuration are briefly revisited. For simplicity, assuming a constant deformation and axial symmetry of the constituent nuclei, we have seven degrees of freedom $`(q_i)=(\theta _1,\theta _2,\theta _3,R,\alpha ,\beta _1,\beta _2)`$, as illustrated in Fig. 3(a). Consistently with the coordinate system, we introduce a rotation-vibration type wave function as the basis, $$\mathrm{\Psi }_\lambda \propto D_{MK}^J(\theta _i)\chi _K(R,\alpha ,\beta _1,\beta _2),$$ (1) where $`\chi _K`$ describes the internal motions. The dynamics of the internal motions has been solved around the equilibrium and various normal modes, such as butterfly vibrations, have been obtained, as shown in Fig. 4(a). Each mode has a characteristic feature with respect to the spin alignments. For example, the butterfly motion shows total intrinsic spin $`I=0`$ dominance, namely, anti-alignment, which is very consistent with the fragment angular distributions. 
However, the indication by the $`\gamma `$-ray measurements that the spin vectors $`𝐈_1`$ and $`𝐈_\mathrm{𝟐}`$ both lie in the scattering plane provides more detailed information. In the following, we introduce a new mode in order to explain the $`\gamma `$-ray data suggesting the fragment spin components ”$`m=0`$”. With extremely high spins such as 30–40$`\hbar `$, stable di-nuclear configurations tend to be ”elongated systems” due to the strong centrifugal force. In prolate-prolate systems, the stable configuration is the pole-to-pole one, while in oblate-oblate systems it is an equator-to-equator one. The former has axial symmetry as a whole, but the latter is axially asymmetric, as displayed in Fig. 3(b). What would be expected from the difference of the symmetries? Approximately, a triaxial system rotates around the axis with the maximum moment of inertia. In the oblate-oblate systems, thus, two pancake-like objects touching side-by-side as in the lower panel of Fig. 3(b) rotate around the $`x`$-axis, which is parallel to the normal to the reaction plane. The axial asymmetry, however, gives rise to a mixing of $`K`$-quantum numbers, which is different from the axially symmetric prolate-prolate cases. In the high spin limit ($`K/J\to 0`$), the diagonalization in the $`K`$-space is found to be equivalent to solving a differential equation of the harmonic oscillator with parameters given by the moments of inertia. Thereby, the solution is a gaussian, or a gaussian multiplied by an Hermite polynomial, $$f_n(K)=H_n(\frac{K}{b})\mathrm{exp}\left[-\frac{1}{2}\left(\frac{K}{b}\right)^2\right],$$ (2) with the width $`b=(2J^2I_K/\mathrm{\Delta })^{1/4}`$, where $`I_K^{-1}=I_z^{-1}-I_{\mathrm{av}}^{-1}`$ and $`\mathrm{\Delta }^{-1}=I_y^{-1}-I_{\mathrm{av}}^{-1}`$ with $`I_{\mathrm{av}}^{-1}=\frac{1}{2}(I_x^{-1}+I_y^{-1})`$. The resultant energy spectrum is displayed in Fig. 4(b), compared with the spectrum without $`K`$-mixing in Fig. 4(a). The consequences of the $`K`$-mixing for the fragment spin orientations and hence for the $`\gamma `$-ray distributions are discussed in the next section. ## 3 Spin Alignments We define the scattering waves and the collision matrix $`U_{c^{}c}`$ through $$\psi \propto (G_c-iF_c)-\underset{c^{}}{\sum }U_{c^{}c}(G_{c^{}}+iF_{c^{}}).$$ (3) By using the $`R`$-matrix formula in the one-level approximation, we obtain $`U_{c^{}c}=e^{i\varphi _{l^{}}}(2\sqrt{2P_{c^{}l^{}}}\gamma _{c^{}l^{}}\sqrt{2P_{cl}}\gamma _{cl})e^{i\varphi _l}/\mathrm{\Gamma }_{\mathrm{total}}`$ for the inelastic scattering, where the reduced widths $`\gamma _{cl}`$ are calculated from the model wave functions. The scattering amplitudes with specified magnetic substates are given by $`A_{m_1m_2}(𝐤^{},𝐤)`$ $`=`$ $`{\displaystyle \frac{2\pi }{ik}}{\displaystyle \underset{L^{},S^{},M}{\sum }}(22m_1m_2|S^{}M_S^{})(L^{}S^{}m^{}M_S^{}|JM)`$ (4) $`\times i^{JL^{}}e^{i(\sigma _J+\sigma _{L^{}})}U_{L^{}S^{}}^JY_{JM}^{}(\widehat{k})Y_{L^{}m^{}}(\widehat{k}^{}).`$ The transition amplitudes for the $`\gamma `$-ray emissions from polarized nuclei are discussed by several authors (see for example Ref. 8). Note that in the experiment only one of the two emitted photons is detected in most cases, even with EUROGAM. Note also that the intensities of the detectors are averaged over the azimuthal angle $`\varphi _\gamma `$ in the data shown in Fig. 2 and are then expressed as $`W(\theta _\gamma )=_mP_mW_m(\theta _\gamma )`$, where $`P_m`$ denotes the probability of the $`m`$ magnetic substate. Now we calculate the $`P_m`$’s and $`W(\theta _\gamma )`$’s with the molecular model; a small numerical check of the Gaussian $`K`$-distribution of Eq. (2) is sketched below. 
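As a consistency check of Eq. (2), one can diagonalize a triaxial rotor Hamiltonian $`H=_kJ_k^2/2I_k`$ directly in the $`|JK`$ basis and compare the lowest eigenvector with a gaussian in $`K`$. The sketch below does this for $`J=38`$; the moments of inertia are arbitrary illustrative values (with $`I_x`$ largest, so that rotation about the normal to the reaction plane is favoured), not the ones of the $`{}_{}{}^{28}\mathrm{Si}+{}_{}{}^{28}\mathrm{Si}`$ configuration.

```python
import numpy as np

def triaxial_ground_state(J=38, Ix=1.00, Iy=0.80, Iz=0.55):
    """Diagonalize H = sum_k J_k^2/(2 I_k) in the |J K> basis (K = -J..J)
    and return the K values and the amplitudes of the lowest state."""
    Ks = np.arange(-J, J + 1)
    Ax, Ay, Az = 1 / (2 * Ix), 1 / (2 * Iy), 1 / (2 * Iz)
    # Diagonal part: (Ax+Ay)/2 * (J(J+1) - K^2) + Az * K^2
    H = np.diag((Ax + Ay) / 2 * (J * (J + 1) - Ks**2) + Az * Ks**2.0)
    # Delta K = +-2 coupling from (Ax-Ay)/4 * (J_+^2 + J_-^2)
    def jp2(K):  # <J,K+2| J_+^2 |J,K>
        return np.sqrt((J - K) * (J + K + 1) * (J - K - 1) * (J + K + 2))
    for i, K in enumerate(Ks[:-2]):
        H[i + 2, i] = H[i, i + 2] = (Ax - Ay) / 4 * jp2(K)
    vals, vecs = np.linalg.eigh(H)
    return Ks, vecs[:, 0]

Ks, amp = triaxial_ground_state()
# |amp|^2 is concentrated near K = 0 and closely follows exp(-K^2/b^2);
# the width b can be read off from the second moment of the distribution.
b = np.sqrt(2 * np.sum(amp**2 * Ks**2))
```

The resulting width satisfies $`bJ`$, consistent with the high-spin limit assumed in the derivation of Eq. (2).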
Combining with the lowest state of the tilting mode in Eq. (2), i.e., $`f_0(K)\propto \mathrm{exp}(-K^2/2b^2)`$, we introduce a refined wave function, $$\mathrm{\Psi }_\lambda ^{JM}\propto \underset{K}{\sum }\mathrm{exp}(-K^2/2b^2)D_{MK}^J(\theta _i)\chi _K(R,\alpha ,\beta _1,\beta _2),$$ (5) where, in general, $`\chi _K`$ can be any excitation mode. A simple choice is that all the internal motions are zero-point ones. In Fig. 2, the theoretical results for $`b=1.3`$ are shown by dotted lines, which are seen to be in good agreement with the data for all three quantization axes (a), (b) and (c). This value of $`b`$ is consistent with the di-nuclear configuration of the $`{}_{}{}^{28}\mathrm{Si}+{}_{}{}^{28}\mathrm{Si}`$ system. ## 4 Concluding Remarks The differences between the oblate-oblate system ($`{}_{}{}^{28}\mathrm{Si}+{}_{}{}^{28}\mathrm{Si}`$) and the prolate-prolate system ($`{}_{}{}^{24}\mathrm{Mg}+{}_{}{}^{24}\mathrm{Mg}`$) have been clarified. Corresponding to the axial asymmetry of the equilibrium shape of the $`{}_{}{}^{28}\mathrm{Si}+{}_{}{}^{28}\mathrm{Si}`$ system, we have introduced $`K`$-mixing and have succeeded in explaining the characteristic features of the spin misalignment. This $`K`$-mixing, namely the tilting mode, would be a new facet in nuclear resonance phenomena. Further experimental study is strongly desired. ## Acknowledgments The authors are grateful to Dr.’s R. Nouicer and C. Beck for the information on the experimental data and stimulating discussions.
no-problem/9909/cond-mat9909242.html
ar5iv
text
# Scale-free energy dissipation and dynamic phase transition in stochastic sandpiles ## I Introduction Open driven systems exhibiting self-organized critical states are usually modeled by sandpile-type automata, in which sand grains are added slowly to the system and its evolution is monitored in terms of collective sandslides (avalanches). With additional complexity due to the stochastic character of the relaxation rules , stochastic cellular automata models have proven useful in understanding certain aspects of granular flow in realistic granular materials and in stochastic biological processes . On the other hand, sandpile automata models are interesting from the theoretical point of view, since both the role of the dynamic conservation law—conservation of the number of grains—and the emergent spatial structures can be easily monitored. Recently a model with probabilistic toppling and directed mass flow has been proposed , in which the probability of toppling $`p`$ represents a control parameter originating either from random variations of sticking properties between grains, or from stochastic external conditions (wetting and drying properties). Another realization is related to stochastic processes in biological systems, such as random dispersion of particles, which are added from the outside and evacuated from the system only when its response lasts longer than a fixed time $`T_0`$, measured on the internal time scale of the process. Particles are transferred among communicating cells according to probabilistic rules; however, for response times shorter than $`T_0`$ they are held inside the system. Therefore, each cell contains a certain number of particles, which varies with time. The probabilistic character of the particle transfer between connected cells can be attributed to mechanisms which depend on the general condition of the system. We consider the case $`T_0\sim L`$, where $`L`$ is the linear system size. The relaxation rules of the model in $`d=1+1`$ dimensions are: if $`h(i,j)h_c=2`$ then with probability $`p`$ the relaxation occurs as follows: $$h(i,j)h(i,j)-2;h(i+1,j_\pm )h(i+1,j_\pm )+1.$$ (1) Here $`h(i,j)`$ is the dynamic variable, i.e., the height (number of particles) at site $`(i,j)`$, and $`(i+1,j_\pm )`$ are the neighboring downstream sites on a 2-dimensional square lattice oriented downwards. Due to the directed mass transport the dynamics of this model is anisotropic, leading to self-affine relaxation clusters in $`d=2`$ (see Fig. 1). Thus the model can also be viewed as directional lines in $`d=1+1`$ dimensions, in which the instability can propagate in one spatial dimension back and forth, whereas the temporal dimension is strictly directed. The system is driven by adding particles from the outside, one at a time at a random position along the first (top) row. Only sites which are connected to toppled sites at the previous time step are considered as candidates for toppling. A perturbation can then be transferred from an active (toppled) site to two forward neighbors. Periodic transverse boundaries are assumed and all candidate sites are updated in parallel. The probabilistic character of the relaxation rules produces a ragged structure of heights. [A transverse section through the pile is shown in Fig. 1 (top).] Two top particles from the surface $`h(i,j)`$ are taken away from the system when the active site is at the lower boundary, i.e., $`i=L`$; a minimal implementation of these rules is sketched below. 
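A minimal implementation of the relaxation rules of Eq. (1) might look as follows. The precise choice of the two forward neighbors, $`(i+1,j)`$ and $`(i+1,j+1)`$ modulo the transverse size, and the flat initial state are assumptions of this sketch rather than details fixed by the text.

```python
import numpy as np

def avalanche(h, p, rng):
    """Add one grain to a random top-row site of the (L x L) height array h and
    relax it according to Eq. (1). Candidates for toppling are only the forward
    neighbours of sites that toppled in the previous parallel update; grains
    leave the system at the last row. Returns (duration T, size s)."""
    L = h.shape[0]
    j0 = rng.integers(L)
    h[0, j0] += 1
    active = [(0, j0)] if h[0, j0] >= 2 else []
    duration = size = 0
    while active:
        duration += 1
        nxt = set()
        for (i, j) in active:
            if h[i, j] >= 2 and rng.random() < p:     # topple with probability p
                size += 1
                h[i, j] -= 2
                if i + 1 < L:                          # otherwise grains exit
                    for jj in (j, (j + 1) % L):        # two forward neighbours
                        h[i + 1, jj] += 1
                        nxt.add((i + 1, jj))
        active = [(i, j) for (i, j) in nxt if h[i, j] >= 2]
    return duration, size

rng = np.random.default_rng(0)
L, p = 64, 0.8
h = np.ones((L, L), dtype=int)                         # assumed initial state
stats = [avalanche(h, p, rng) for _ in range(20_000)]
```

Histogramming the recorded durations and sizes (after discarding an initial transient) gives the distributions $`P(T)`$ and $`D(s)`$ analyzed below.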
It should be stressed that, according to the above relaxation rules, the transport of grains is independent of the relative heights of neighboring sites. Therefore, it may occur that at some sites the height difference in the direction of transport is negative, and thus the system performs work in order to maintain the transport. A nice example of such dynamic rules was found recently in biological transport processes which are mediated by so-called molecular motors . These are protein molecules that can use excess energy from the chemical reactions in the fuel (adenosine triphosphate) and perform mechanical work. It has been understood that the automaton exhibits self-organized criticality for the range of values of the control parameter $`pp^{}`$, where $`p^{}=p_c^{SDP}=0.7054853(5)`$ is the percolation threshold for site directed percolation on the square lattice. Due to the dynamic conservation law—the conserved number of grains in the interior of the pile—and the probabilistic relaxation rules, which are locally like the generalized site-bond directed percolation (see below), the system exhibits an emergent spatial structure, as discussed in detail in Ref. . Moreover, the avalanche exponents for the integrated probability distributions of duration, $`P(T)T^{(\tau _t1)}`$, and size of avalanches, $`D(s)s^{(\tau _s1)}`$, are expressible in terms of the standard directed percolation exponents in $`d=1+1`$ dimensions as follows : $$\tau _t1\alpha =(d1)\zeta _{DP}\left(\beta /\nu _{\parallel }\right)_{DP},$$ (2) and $`\tau _s=21/\tau _t`$, and the anisotropy exponent $`\zeta =\zeta _{DP}`$. The subscript $`DP`$ refers to directed percolation, and $`\beta `$ and $`\nu _{\parallel }`$ are the critical exponents of the order parameter and of the parallel correlation length, respectively. In the present work we extend the study of the model of Ref. in two ways: (1) We study the probability distribution of the potential energy dissipated in a relaxation event (avalanche) as grains drop from higher to lower positions at the fluctuating sandpile surface. This distribution is unique to dynamic sandpile models with a ragged spatial structure and has no counterpart in directed percolation processes. Thus we expect that the exponents characterizing its scaling properties are new. (2) We analyze the behavior of the system close to the phase transition point in terms of $`(i)`$ the scaling properties of the survival probability distribution for $`p<p^{}`$ and $`(ii)`$ the time-averaged outflow current $`<J(p)>`$, which behaves as an order parameter. The outflow current results from avalanches which last longer than the system size, $`TL`$. The average is taken over the external time scale, which is measured in the number of added grains. The internal current at time $`T<L`$ is defined as $`<j(T,p)>T^\alpha m(T,p)`$, where $`m(T,p)`$ is the average flux of particles at time $`T`$, and $`T^\alpha `$ is the probability that an avalanche survives $`T`$ steps. In the steady state the outflow current $`<J(p)>`$ balances the input current, which is one particle per time step, and thus it is equal to one. For $`p<p^{}`$ the system ceases to conduct particles, and the outflow current drops to zero for $`\mathrm{}\xi (p)`$ and finite lattice size $`L`$ as $$<J(p)>(\delta p)^\beta g(L/\xi (p)),$$ (3) where $`\delta p(pp^{})/p^{}`$ measures the distance from the steady state and $`\xi (p)(\delta p)^{\nu _{\parallel }}`$ is the parallel correlation length. 
The set of critical exponents is determined by appropriate scaling fits and by using the scaling relations that are valid in the present dynamical model (see Sections IV and V). The organization of the paper is as follows: In Sec. II the phase diagram of the system is calculated numerically for finite lattice size $`L`$. In Sec. III we study the scaling properties of the distribution of energy dissipated in avalanches. In Sec. IV we present a detailed numerical analysis of the phase transition. A summary of the universal scaling exponents and a discussion of the results are given in Sec. V. ## II Phase diagram Due to the probabilistic dynamic rules the avalanches in this model show a fractal structure; an example is shown in Fig. 1 (bottom). In the limit $`p=1`$ this model reduces to the deterministic directed model with compact avalanches, which has been introduced and solved exactly in Ref. . As discussed in detail in Ref. , for $`p^{}p<1`$ the relaxation rules at each site of the system may be visualized as the rules of a Domany-Kinzel cellular automaton of generalized site-bond directed percolation (DP), with probabilities $`P_1`$ and $`P_2`$ that a toppling occurs if one or two particles, respectively, drop at that site. According to Eq. (1), we have $`P_1p\rho `$, where $`\rho `$ is the probability that the site has height $`h1`$, and $`P_2p`$ by definition. In contrast to DP, in the present dynamic model the state is being systematically built up after each avalanche, and the probability $`\rho `$ was found to vary with the distance $`\mathrm{}`$ from the top of the pile as $`\rho (\mathrm{},p)=\rho ^{}(p)A(p)\mathrm{}^x`$. Here $`x=1/\nu _{\parallel }^{DP}`$ (see inset to Fig. 2) is the inverse parallel correlation length exponent of directed percolation . In the above formula $`\rho ^{}(p)`$ are the values of $`\rho (\mathrm{},p)`$ reached for $`\mathrm{}\mathrm{}`$. Notice that the distance $`\mathrm{}`$ from the top of the pile is equivalent to the duration $`T`$ of avalanches. In Fig. 2 we show the time-averaged $`<\rho ^{}(p)>`$ vs. $`p`$ obtained numerically and averaged over the lower third of the lattice with $`L=100`$, for various values of $`p`$. Two types of initial conditions are used: ($`a`$) a full lattice (all lattice sites are occupied by at least one particle), and ($`b`$) a half-full lattice (half of the sites, selected randomly, have zero heights and the rest are occupied). In both cases, for an initial set of probabilities $`(p,\rho ^{})`$ in the region to the right of $`p^{}`$ the system self-organizes (after some transient time) to a state sitting close to the DP critical line (cf. Fig. 2). Left of the line $`p=p^{}`$ different initial conditions lead to separate final states. We used $`8\times 10^6`$ time steps for each point. Notice that due to the finite size of the lattice $`<\rho ^{}(p^{})>`$ is still somewhat smaller than one (indicated by the dotted line in Fig. 2), and that a dynamical hysteresis occurs in the region $`0.5p<p^{}`$ . At $`p^{}`$ an instability—a building up of heights—starts at the lower boundary of the pile and proliferates inward, reaching the first row for a probability of toppling of exactly $`p=1/2`$. Apart from finite-size effects, the phase diagram in Fig. 2 is in agreement with the general theoretical considerations given in Ref. . ## III Dissipated energy distribution Due to the probabilistic character of the dynamic relaxation rules in Eq. 
(1) for $`p<1`$ and the conservation of particles in the interior of the pile, our dynamic model exhibits an emergent spatial structure , which is characterized by a rough surface $`h(i,j)`$ embedded in 3-dimensional space. Therefore, mass transport in the preferred direction takes place along a rough surface. Conditions for potential energy dissipation are fulfilled when the height difference $`_{}h_\pm h(i,j)h(i+1,j_\pm )1`$ along the direction of transport is positive. More precisely, since relaxation at a site involves two particles, the energy is dissipated at that site when the sum E$`(i,j)_{}h_{}+_{}h_+1>0`$. Two comments are in order at this point: (1) Although the driving force of the grain transport in this model is not a gradient of the potential energy, as discussed in the Introduction, we believe that the cumulative potential energy dissipated in an avalanche is an interesting quantity, directly related to the temporal fluctuations of the emergent structure in real space. We concentrate on the properties of the energy distribution near the phase transition point $`p=p^{}`$, where the surface exhibits dramatic fluctuations, and only briefly discuss the scaling behavior for $`p>p^{}`$. (2) We distinguish between the energy dissipated at a fixed site, through which various avalanches run, and the energy dissipated at different points in a whole avalanche. Here we study the scaling properties of the latter quantity. Notice that the condition $`E(i,j)>0`$ is fulfilled on a set of points $`𝒮`$ which is a random subset (see Fig. 1) of the avalanche of size $`s`$ . The total energy dissipated in an avalanche is then $`E=_𝒮E(i,j)`$. The probability distribution of the dissipated energy $`P(E)`$ is found to obey a power-law behavior with the exponent $`\tau _E`$ for $`p^{}p<1`$, and the following scaling form is satisfied: $$P(E,L)=L^{(\tau _E1)D_E}𝒫(EL^{D_E})$$ (4) with $`(\tau _E1)D_E=\tau _t1`$, where $`\tau _t1\alpha `$ is the survival probability distribution exponent. In Fig. 3 we show the integrated distribution of dissipated energies for $`p=p^{}`$ and for four different values of the lattice size $`L`$. The slope gives the exponent $`\tau _E1=0.24`$, and the finite-size scaling plot according to (4), shown in the inset to Fig. 3, is obtained with $`\alpha =`$0.45 and $`D_E=`$1.84. In contrast to the survival probability distribution and the size distribution of avalanches, the distribution of dissipated energy cannot be defined in directed percolation processes, and thus the exponents $`\tau _E`$ and $`D_E`$ are new and not directly related to the DP exponents. Moreover, we find that the exponents $`\tau _E`$ and $`D_E`$ are $`p`$-dependent; for instance, for $`p=`$0.8 we obtain $`\tau _E=`$ 1.27 and $`D_E=`$ 1.66, and $`\tau _E=`$ 1.29 and $`D_E=`$ 1.55 for $`p=`$0.9. However, a combination of these exponents can be related to the survival probability exponent $`\alpha `$ via the scaling relation $`(\tau _E1)D_E=\alpha `$, which holds in the SOC states, where $`\alpha `$ is the universal exponent expressible in terms of DP exponents via Eq. (2). This scaling relation is satisfied within the numerical error bars (estimated as $`\pm 0.02`$) for all values of $`p`$ in the region $`p^{}p<1`$. In the limit $`p=`$1 the critical state is exactly known and consists only of the heights $`h=1`$ and $`h=0`$. Consequently, the dissipated energy is restricted to a few integer values, and the distribution $`P(E)`$ has the same scaling exponents as the avalanche size distribution; the energy bookkeeping used in our measurements is sketched below. 
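The measurement of $`E`$ can be grafted onto the avalanche sketch given in Sec. I: just before a site topples, its contribution is evaluated from the instantaneous heights. The helper below uses the same (assumed) forward-neighbour convention; the treatment of boundary topplings, where the grains simply leave the system, is likewise an assumption of the sketch.

```python
def site_dissipation(h, i, j):
    """E(i,j) = grad_minus + grad_plus - 1, with grad_pm = h(i,j) - h(i+1,j_pm) - 1,
    counted only when positive; heights are taken just before the two grains move."""
    L = h.shape[0]
    if i + 1 >= L:
        return 0                       # grains exit at the open boundary
    grad_minus = h[i, j] - h[i + 1, j] - 1
    grad_plus = h[i, j] - h[i + 1, (j + 1) % L] - 1
    E = grad_minus + grad_plus - 1
    return E if E > 0 else 0
```

Accumulating `site_dissipation(h, i, j)` over all toppled sites of one avalanche gives the total $`E=_𝒮E(i,j)`$, whose histogram over many avalanches yields $`P(E)`$.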
For $`p^{}p<1`$ the emergent spatial structure appears due to both the stochastic dynamics and the conservation of the number of particles in the interior of the pile . At the edge of the scaling region ($`p=p^{}`$) we find that the average height (averaged in the transverse direction) increases with the distance $`\mathrm{}`$ from the top row as $`<h(\mathrm{})>a\mathrm{}^b`$, with $`b=0.59\pm 0.02`$ (cf. Fig. 1 (top)). Therefore, for large $`\mathrm{}`$ there is a finite probability of large heights; however, at the same distance $`\mathrm{}`$ some sites have height zero, since the system is in the stationary critical state (otherwise the avalanche would propagate as a directed percolation cluster, which violates the stationarity condition). Thus, $`_{}h`$ also increases with $`\mathrm{}`$ and becomes unbounded for $`\mathrm{}\mathrm{}`$. Thus, the energy cutoff has an additional nontrivial $`\mathrm{}`$-dependence, which is not contained in the $`\mathrm{}`$-dependence of the avalanche size cutoff, indicating that the sandpile surface is a fractal at $`p=p^{}`$. In the interior of the scaling region ($`p^{}<p<1`$), the average height remains finite and independent of $`\mathrm{}`$; however, the height distribution does strongly depend on $`p`$. We find that the width of the height distribution $`w(p)`$ increases smoothly with decreasing $`p`$, from $`w=1`$ at $`p=1`$ to a fast diverging function at $`pp^{}`$. Neighboring sites are weakly correlated, since the dynamics is governed by the critical height rule only, and thus the energy $`E(i,j)`$ dissipated at a site also depends on $`p`$. We checked by direct calculation that the distribution of dissipated energy at a fixed site in the interior of the pile, $`P(E_{site})`$, taken over $`2\times 10^6`$ avalanches, exhibits a strong $`p`$-dependence. It is an exponential function of width $`w_{es}`$, which increases smoothly with decreasing $`p`$ and becomes nearly a power law at $`p=p^{}`$. We believe that the $`p`$-dependence of the height distribution is the origin of the observed nonuniversality of the energy exponents. On the other hand, the size and duration of avalanches are governed by the probabilities $`p`$ and $`\rho ^{}`$, which always sit on the DK critical line in Fig. 2. It should be noticed that the probability $`\rho (\mathrm{})`$ does not depend on the particular values of the heights $`h>1`$, and thus the avalanche exponents remain universal (cf. Eq. (2)). We find that the distribution of mechanical work done by the system exhibits a curvature and not a power-law behavior. ## IV Dynamic phase transition In the region below $`p^{}`$ the system ceases to conduct and starts accumulating particles. As a consequence the critical steady state is lost (see the detailed discussion in Ref. ) and the probability distributions exhibit exponential cut-offs with a finite correlation length, depending on the distance from $`p^{}`$. In Fig. 4 we show the survival probability distribution $`P(T,p)`$ for a few values of $`p<p^{}`$ and $`L=`$200. In general, a distribution $`P(X,p,L)`$ satisfies the following scaling form in the subcritical region: $$P(X,p,L)=\left(\delta p\right)^{D_X\nu _{\parallel }\tau _X}𝒫(X\left(\delta p\right)^{D_X\nu _{\parallel }},XL^{D_X})$$ (5) where $`\delta p(p^{}p)/p^{}`$ and $`X`$ stands for $`T`$, $`s`$, or $`E`$, respectively, with $`D_X`$ the corresponding fractal dimension. In the case of the distribution of durations $`P(T,p,L)`$ we have $`D_Tz`$, the dynamic exponent, and $`z=1`$ in the present model. 
Therefore there are no finite-size effects in the survival probability distribution, which makes it particularly suitable for the subcritical scaling analysis. In the case of the size and energy distributions, one is restricted to values of $`p`$ and $`L`$ such that the condition $`(\delta p)^{\nu _{\parallel }}/L1`$ is satisfied. In the inset to Fig. 4 the scaling collapse according to Eq. (5) of the survival probability distribution is shown, where we have used $`\alpha =0.45`$ and $`z\nu _{\parallel }=1.22`$. Another way to study the dynamic phase transition is by direct measurement of the order parameter, i.e., the time-averaged outflow current $`<J(p)>`$. In the critical steady state $`<J(p)>=1`$, thus balancing the average input current. Below the transition point this balance is lost. The outflow current decreases, reaching zero at some lower value of $`p`$ which depends on the system size $`L`$ [see Fig. 5 (bottom)]. For different lattice sizes we expect the following scaling form to hold: $$<J(p,L)>=L^{\beta /\nu _{\parallel }}𝒥(L^{1/\nu _{\parallel }}(pp^{})/p^{}).$$ (6) This scaling form follows from the general scaling properties of the internal current for $`T<L`$, which reads $`<j(T,p,L)>L^{\lambda _J}j(L^{1/\nu _{\parallel }}(pp^{})/p^{},L^zT)`$. By choosing $`L(p/p^{}1)^{\nu _{\parallel }}\xi `$ and having defined the exponent $`\beta `$ in Eq. (3), we find that the anomalous dimension is $`\lambda _J=\beta /\nu _{\parallel }`$. Therefore $`<j(T,p,L)>=L^{\beta /\nu _{\parallel }}𝒢(L^{1/\nu _{\parallel }}(pp^{})/p^{},L^zT)`$. In the stationary state for $`pp^{}`$, the correlation length $`\xi \mathrm{}`$ and the first argument of $`𝒢`$ can be neglected. For $`TL^z`$ we expect the scaling function $`𝒢`$ to behave as a power, i.e., $`<j(T,p,L)>const\times T^\alpha L^{z\alpha \beta /\nu _{\parallel }}`$, which should be independent of $`L`$, thus leading to $`\beta /\nu _{\parallel }=z\alpha `$. For the outflow current, however, we have $`TL`$ and the second argument of $`𝒢`$ can be neglected. Then for $`p<p^{}`$ one gets the expression (6). Taking $`L^{1/\nu _{\parallel }}|pp^{}|/p^{}1`$ leads to Eq. (3). The scaling plot according to Eq. (6) is shown in Fig. 5 (top), where we have $`\beta /\nu _{\parallel }=0.45`$ and $`1/\nu _{\parallel }=0.83`$; the rescaling procedure is sketched below. Together with the above results, and observing the error bars for $`\alpha `$ from , we estimate the exponents as $`\nu _{\parallel }=1.22\pm 0.02`$ and $`\beta =0.56\pm 0.02`$. Notice that the above values of the exponents $`\nu _{\parallel }`$ and $`\beta `$ are close to the values $`1.28\pm 0.06`$ and $`0.58\pm 0.06`$, respectively, obtained by Monte Carlo simulations for $`d=3`$ dimensional directed percolation in Ref. . The reason for this similarity lies in the altered character of the dynamics of our model below the transition. Namely, the probability $`\rho ^{}`$ of having height $`h1`$ reaches unity (in the limit of large $`L`$) at $`p^{}`$. The consequences of this are twofold: ($`i`$) the threshold character of the dynamics is lost for $`p<p^{}`$, i.e., each site in the lattice satisfies the threshold condition $`hh_c=2`$ when a single particle drops on that site; ($`ii`$) since $`\rho =1`$ we have $`P_1=P_2=p`$; thus the system spreads the perturbation in the $`(R_{\parallel },R_{})`$ plane with probability $`p`$ and builds up heights with probability $`q=1p`$. Therefore, for $`0.5<p<p^{}`$ we have a dynamic model in which an avalanche propagates effectively as a cluster in 3-dimensional directed percolation, with finite widths $`\xi _{\parallel },\xi _{}`$ in the plane, and percolating in the vertical direction. 
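The data collapse behind Fig. 5 (top) amounts to a simple rescaling of the measured curves. The sketch below assumes the data are held as a mapping from lattice size to arrays of $`(p,<J>)`$ values; the exponent values are the ones quoted in the text.

```python
import numpy as np

P_STAR = 0.7054853

def collapse(curves, beta_over_nu=0.45, inv_nu=0.83, p_star=P_STAR):
    """Rescale order-parameter curves {L: (p, J)} according to Eq. (6):
    x = L**(1/nu_par) * (p - p*)/p*,  y = L**(beta/nu_par) * J.
    With the correct exponents all rescaled curves fall on one master curve."""
    rescaled = {}
    for L, (p, J) in curves.items():
        p, J = np.asarray(p), np.asarray(J)
        x = L**inv_nu * (p - p_star) / p_star
        y = L**beta_over_nu * J
        rescaled[L] = (x, y)
    return rescaled
```

In practice the exponents are tuned until the spread between the rescaled curves is minimized, which is how the quoted values $`\beta /\nu _{\parallel }=0.45`$ and $`1/\nu _{\parallel }=0.83`$ are selected.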
However, there are considerable differences between these processes and conventional 3-dimensional DP, leading to a generally different set of exponents, as discussed below. An added particle moves along the rough surface $`h(R_{\parallel },R_{})`$, which fluctuates inside the correlated region $`(\xi _{\parallel },\xi _{})`$. The internal time scale becomes bounded by the finite $`\xi _{\parallel }`$, and the system percolates for $`t\mathrm{}`$, where $`t`$ is now the external time scale (measured by the number of added particles). Flights of particles along the rough surface are proportional to the local height gradients $`_{}h`$, which are usually larger than one, in contrast to contact percolation processes. However, the average height of the pile $`<h>`$ grows exactly by one unit with each added particle, as a consequence of the conservation of the number of particles. ## V Discussion and Conclusions The dynamic model with the stochastic relaxation rules of Eq. (1) exhibits universal self-organized criticality for the range of toppling probabilities $`p^{}p<1`$, and a dynamic phase transition at $`p=p^{}`$. As discussed in detail in Ref. , the avalanche exponents $`\alpha `$ for the survival probability, $`\tau \tau _s1`$ for the integrated cluster size distribution, and $`\zeta `$ for the average transverse extent of clusters are expressible in terms of standard directed percolation exponents in all dimensions via Eq. (2). Here we have shown that the dissipated energy distribution, which is peculiar to the dynamic model and has no analogue in directed percolation processes, is described by a new exponent $`\tau _E`$ and a corresponding fractal dimension $`D_E`$. These exponents appear to be $`p`$-dependent; however, their product is related to the universal survival probability exponent through the scaling relation $`(\tau _E1)D_E=\alpha `$. The power-law behavior of the distribution $`D(E)`$ at $`p=p^{}`$ indicates that the sandpile surface $`h(i,j)`$ is a fractal. We estimate the roughness exponent $`\chi `$ by measuring the time-averaged height vs. the transverse dimension of the pile, $`<h(R_{tr})>R_{tr}^\chi `$, at various distances $`\mathrm{}`$ from the first row [see Fig. 1 (top)]. By box counting we find that the contour curve of the perpendicular section through the pile for $`p=p^{}`$ has fractal dimension $`d_f=1.44\pm 0.045`$, leading to $`\chi =d_f1=0.44\pm 0.04`$. Error bars are estimated from several separate measurements at different sections. The roughness exponent appears to be larger than the one measured in the ricepile model with critical slope rules, where it was found that $`\chi _{RP}=`$0.23 . In the steady state the height fluctuates around the average value $`<h>=19\pm 6`$, increasing by one unit at the boundary sites of an avalanche and decreasing by one unit at transport sites with one preceding active neighbor. The flame-like profile in Fig. 1 (top) indicates individual site fluctuations, in agreement with the critical height rules in our model. The closeness of the exponents $`\chi \alpha `$ indicates that the number of sites at which the pile grows is on average equal to the number of transport sites, i.e., the avalanches have almost no compact parts (cf. Fig. 1). Below the transition point the pile grows indefinitely for $`t\mathrm{}`$, as discussed in Sec. IV. The dynamic phase transition at $`p^{}`$ is characterized by the exponents of the order parameter, $`\beta `$, and of the parallel correlation length, $`\nu _{\parallel }`$, whose numerical values appear to be very close to those of directed percolation in $`d+1`$ dimensions. 
However, the exponent $`\gamma `$ for the order-parameter fluctuations and the exponent $`\kappa `$, which describes the average cluster growth at time $`T`$ as $`m(T)T^\kappa `$, appear to be different from the $`d=3`$ DP exponents (see Ref. ). A complete set of exponents is given in Table I. With regard to the exponents in Table I we would like to point out the following: (1) In the critical steady state the average cluster growth balances the average input of particles, i.e., $`m(T)1`$, leading to $`\kappa =0`$. The scaling relation $`\kappa =\gamma /\nu _{\parallel }1=D_{\parallel }1\alpha `$ holds; thus we have $`\gamma =\nu _{\parallel }`$. (2) At the dynamic phase transition the hyperscaling relation (HS) $`2\beta +\gamma =[1+(d1)\zeta ]\nu _{\parallel }`$ appears to be violated , in contrast to the thermodynamic DP phase transition (cf. Table I). The avalanche exponents are determined in Ref. . The exponents for DP in $`d=2`$ are taken from Ref. , and the corresponding avalanche exponents $`\tau _{DP}`$ and $`D_{DP}`$ are calculated using the scaling relations. (Notice that due to the presence of anisotropy, at least three exponents should be known in order to determine the universality class completely.) The following scaling relations are valid both in DP and in the dynamic models: $`\beta /\nu _{\parallel }=\tau _t1\alpha `$; $`\beta /(\beta +\gamma )=\tau _s1\tau `$; $`\zeta =\nu _{}/\nu _{\parallel }`$, and $`D_{\parallel }\nu _{\parallel }=\beta +\gamma `$. The hyperscaling violation exponent $`\mathrm{\Omega }`$ is defined via $`\beta +\gamma =[1+(d1)\zeta \beta /\nu _{\parallel }+\mathrm{\Omega }]\nu _{\parallel }`$, which together with the above equations leads to $`\mathrm{\Omega }=D_{\parallel }1+\alpha (d1)\zeta `$. Here $`D_{\parallel }`$ is the fractal dimension of the size of relaxation clusters measured with respect to the length parallel to the transport direction. Using the fact that $`D_{\parallel }=1+\alpha `$ in the dynamic model, we may write $`\mathrm{\Omega }=2\alpha (d1)\zeta `$. In directed dynamic processes it is useful to define another exponent, $`\beta ^{}`$, such that the generalized hyperscaling relation $$\alpha (1+\beta ^{}/\beta )+\kappa =(d1)\zeta ,$$ (7) is satisfied. Here $`\beta ^{}`$ is related to the ultimate survival probability (the survival probability of a cluster grown from a fixed seed), whereas $`\beta `$ governs the usual order parameter—the stationary density of active sites. Recently the exponent inequality $`\beta ^{}\beta `$ was found in models with multiple absorbing configurations and in branching annihilating random walks with even parity . Our results in this paper suggest that in the self-organized dynamic critical states in directed models $`\kappa =0`$ and $`\beta >\beta ^{}`$ are always satisfied. We have $$\mathrm{\Omega }=\frac{\beta }{\nu _{\parallel }}\frac{\beta ^{}}{\nu _{\parallel }}>0.$$ (8) Using the above scaling relations and Eq. (2) we find that $`\beta ^{}/\nu _{\parallel }=\alpha _{DP}`$=0.159 (cf. Table I). Therefore, the HS violation exponent $`\mathrm{\Omega }`$ is given by the difference between the survival probability exponents in the dynamic model and in the underlying directed percolation, $`\mathrm{\Omega }=\alpha \alpha _{DP}`$, which turns out to be equal to the cluster growth exponent of DP, i.e., $`\mathrm{\Omega }=\kappa _{DP}`$. One can also define a new configuration exponent $`\gamma ^{}`$ via $`\gamma ^{}/\nu _{\parallel }\gamma /\nu _{\parallel }+\mathrm{\Omega }`$, such that the relation $`2\beta ^{}+\gamma ^{}=(1+(d1)\zeta )\nu _{\parallel }`$ is satisfied. By inserting the above expression for $`\mathrm{\Omega }`$ into $`\gamma ^{}`$, we have $`\gamma ^{}/\nu _{\parallel }=1+\kappa _{DP}=1.314`$. 
Using the value of $`\nu _{\parallel }`$ from Table 1 we find $`\gamma ^{}=1.586\pm 0.026`$ and $`\beta ^{}=0.195\pm 0.012`$. The origin of the difference between $`\alpha `$ and $`\alpha _{DP}`$ lies in the dynamic conservation law, which leads to the emergent spatial structure and to the dependence of the branching probability $`\rho (\mathrm{})`$ on the distance $`\mathrm{}`$, as discussed in Ref. . Our present results suggest that the dynamic conservation law is also responsible for the new universality class of the dynamic phase transition at the edge of the critical region; a small numerical consistency check of the scaling relations collected above is given below. ###### Acknowledgements. I thank Deepak Dhar for fruitful discussions and suggestions which led to the results presented in Fig. 2. I also thank Maya Paczuski for helpful comments and suggestions. This work was supported by the Ministry of Science and Technology of the Republic of Slovenia.
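For completeness, a few of the scaling relations quoted above can be checked numerically from the measured values. The snippet below treats $`\alpha =0.45`$, $`\nu _{\parallel }=1.22`$, $`z=1`$ and the $`d=1+1`$ DP value $`\zeta 0.633`$ as inputs; small mismatches with the directly measured $`\beta =0.56\pm 0.02`$ are within the quoted error bars.

```python
# Scaling-relation bookkeeping with the exponent values quoted in the text.
alpha, nu_par, z = 0.45, 1.22, 1.0      # measured in the dynamic model
zeta = 0.633                            # zeta_DP = nu_perp/nu_par for d = 1+1 DP
d = 2

beta = z * alpha * nu_par               # beta/nu_par = z*alpha   -> ~0.55
gamma = nu_par                          # kappa = 0 implies gamma = nu_par
D_par = 1 + alpha                       # D_par = 1 + alpha
Omega = 2 * alpha - (d - 1) * zeta      # hyperscaling violation  -> ~0.27
tau_t = 1 + alpha
tau_s = 2 - 1 / tau_t
print(beta, gamma, D_par, Omega, tau_s)
```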
no-problem/9909/nucl-th9909027.html
ar5iv
text
## 1 INTRODUCTION The electromagnetic (e.m.) current operator and the states of a system must have correct transformation properties with respect to the same representation of the Poincaré group. For systems of interacting particles some of the Poincaré generators are interaction dependent, and therefore it is not a simple task to construct a current operator able to fulfill the proper Poincaré, parity and time reversal covariance, as well as current conservation, hermiticity and charge normalization. Instead of including the relativistic properties perturbatively through expansions in $`p/E`$, or considering field-theoretic approaches, we adopt the front-form Hamiltonian dynamics with a fixed number of particles, which includes relativity in a coherent, non-perturbative way and allows one to retain a large amount of the successful phenomenology developed within the ”non-relativistic” domain. In this dynamics seven, out of ten, Poincaré generators are interaction free. In particular, the Lorentz boosts are interaction free and the states can be written as products of total momentum eigenstates times intrinsic eigenstates. In the two-body case, if the mass operator, $`\stackrel{~}{M}`$, for the intrinsic functions is defined according to $`\stackrel{~}{M}^2=M_0^2+V`$ (with $`M_0`$ the free mass operator and $`V`$ the interaction operator), then the mass equation has the same form as the nonrelativistic Schroedinger equation in momentum representation . In Ref. (a) it was shown that all the requirements of Poincaré covariance can be satisfied if, in the Breit frame where the momentum transfer, $`\stackrel{}{q}`$, is directed along the spin-quantization axis, $`z`$, the current operator is covariant with respect to rotations around $`z`$. Since in the front form the rotations around the $`z`$ axis are kinematical, the extended Poincaré covariance (i.e., Poincaré covariance plus parity and time reversal covariance) is satisfied by a current operator which in our Breit frame is given by the sum of free, one-body currents (i.e., $`J^\mu (0)=_{i=1}^Nj_{free,i}^\mu `$, with $`N`$ the number of constituents in the system). The hermiticity can be easily implemented and, in the elastic case, the extended Poincaré covariance plus hermiticity imply current conservation (a,e). Our Poincaré covariant current operator has already been successfully tested in the case of deep inelastic scattering (b). In this paper we calculate deuteron e.m. properties; more details for the magnetic moment and the quadrupole moment and other results for the deuteron elastic form factors (f.f.) can be found in Refs. (e) and (c,d), respectively. ## 2 DEUTERON ELECTROMAGNETIC FORM FACTORS In the elastic case, for a system of spin $`S`$ one has only $`2S+1`$ non-zero independent matrix elements for the current defined in Ref. (a), corresponding to the $`2S+1`$ elastic form factors. Then the extraction of the elastic e.m. form factors is no longer plagued by the ambiguities which are present when, as usual, the free current is considered in the reference frame where $`q^+=q^0+q_z=0`$ (indeed, if the current is taken free in the $`q^+=0`$ frame, one has four independent matrix elements to calculate the three deuteron f.f. ). Our results for the deuteron magnetic and quadrupole moments, corresponding to different $`NN`$ interactions, are reported in Fig. 1, together with the non-relativistic ones, against the deuteron asymptotic normalization ratio $`\eta =A_D/A_S`$. 
A remarkable linear behaviour appears for both quantities, and in our Poincaré covariant calculation the relativistic effects bring both $`\mu _d`$ and $`Q_d`$ closer to the experimental values, except for the charge-dependent Bonn interaction .

In Fig. 2 we report our results for the deuteron f.f. $`A(Q^2)`$ and $`B(Q^2)`$. For $`A(Q^2)`$ the dependence on the nucleon f.f. is very strong, stronger than the effect of different $`NN`$ interactions (see also Refs. (c,d)). If the poorly known neutron electromagnetic structure is properly fitted, the overall behaviour of the experimental data can be reproduced; in particular, the recent $`A(Q^2)`$ data of Refs. and (open dots and upward triangles, respectively, in Fig. 2(a)) are well described. With respect to the nucleon f.f. model of Ref. , small changes are obtained for $`G_M^n`$ and higher values for $`G_E^n`$ in the range $`0.5-1\ (GeV/c)^2`$. The tensor polarization $`T_{20}(Q^2)`$ has a considerable dependence on the interaction (c,d) and only a weak dependence on the nucleon f.f., as is well known, so that it cannot be described, together with $`A(Q^2)`$ and $`B(Q^2)`$, by a simple fit of the neutron form factors. In order to obtain a more precise description of the data one should explicitly introduce two-body currents, which will have to fulfill separately the constraints of extended Poincaré covariance and hermiticity. A more stringent comparison with new TJNAF data for $`B(Q^2)`$ will be possible in the near future.

## 3 CONCLUSION

In the Breit frame where the three-momentum transfer is directed along the spin quantization axis, the current operator has to be covariant for rotations around the $`z`$ axis. All the necessary requirements of extended Poincaré covariance, hermiticity, current conservation and charge normalization can be satisfied by a current built from free, one-body terms in that frame (a,e). We have applied our results to the calculation of the deuteron elastic f.f., without any ambiguity. We were able to obtain, for the first time in a covariant relativistic approach without ad hoc assumptions on the values of specific matrix elements of the current, unambiguous results for both $`\mu _d`$ and $`Q_d`$, close to the experimental data (e). The f.f. $`A(Q^2)`$ and $`B(Q^2)`$ show remarkable effects from different models for the nucleon f.f., while $`T_{20}(Q^2)`$ has a stronger dependence on the different $`NN`$ interactions. Our approach, based on the reduction of the whole complexity of the Poincaré covariance to the SU(2) symmetry (a), can represent a simple framework in which to investigate the many-body terms to be added to the free current.
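As an illustration of the front-form mass equation quoted in the introduction — $`\stackrel{~}{M}^2=M_0^2+V`$ having the same structure as a momentum-space Schroedinger problem — the following minimal sketch (our own, not the calculation of Refs. (a)–(e)) diagonalises the squared mass operator for two equal-mass constituents on a radial momentum grid. The Gaussian interaction kernel and all parameter values are invented purely for illustration.

```python
import numpy as np

# Two equal-mass constituents; every number below is illustrative.
m = 0.938            # constituent mass (GeV)
n, kmax = 200, 8.0   # radial momentum grid: n points up to kmax (GeV)
k = (np.arange(n) + 0.5) * kmax / n
w = np.full(n, kmax / n)                  # quadrature weights

M0_sq = (2.0 * np.sqrt(m**2 + k**2))**2   # free mass operator squared

# Model s-wave interaction kernel V(k, k'): attractive Gaussian.
g, b = -4.0, 1.4
V = g * np.outer(k, k) * np.exp(-b * (k[:, None]**2 + k[None, :]**2))

# M^2 = M_0^2 + V: a diagonal "kinetic" term plus an interaction kernel,
# exactly the structure of a momentum-space Schroedinger Hamiltonian.
Msq = np.diag(M0_sq) + V * np.sqrt(np.outer(w, w))
ground = np.linalg.eigvalsh(Msq)[0]
print("lowest mass eigenvalue: %.4f GeV" % np.sqrt(ground))
```

The practical content of the statement in the introduction is visible here: the eigenvalue problem for $`\stackrel{~}{M}^2`$ is solved with the same numerical machinery as a non-relativistic momentum-space bound-state problem.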
# The stellar populations of spiral galaxies

## 1 Introduction

It has long been known that there are systematic trends in the star formation histories (SFHs) of spiral galaxies; however, it has been notoriously difficult to quantify these trends and directly relate them to observable physical parameters. This difficulty stems from the age-metallicity degeneracy: the spectra of composite stellar populations are virtually identical if the percentage change in age or metallicity (Z) follows $`\mathrm{\Delta }\mathrm{age}/\mathrm{\Delta }\mathrm{Z}\approx 3/2`$ \[Worthey 1994\]. This age-metallicity degeneracy is split only for certain special combinations of spectral line indices or for limited combinations of broad band colours (Worthey 1994; de Jong 1996c; hereafter dJiv). In this paper, we use a combination of optical and near-infrared (near-IR) broad band colours to partially break the age-metallicity degeneracy and relate these changes in SFH to a galaxy’s observable physical parameters.

The use of broad band colours is now a well established technique for probing the SFH of galaxy populations. In elliptical and S0 galaxies there is a strong relationship between galaxy colour and absolute magnitude \[Sandage & Visvanathan 1978, Larson, Tinsley & Caldwell 1980, Bower, Lucey & Ellis 1992, Terlevich et al. 1999\]: this is the so-called colour-magnitude relation (CMR). This correlation, which is also present at high redshift in both the cluster and the field \[Ellis et al. 1997, Stanford, Eisenhardt & Dickinson 1998, Kodama, Bower & Bell 1999\], seems to be driven by a metallicity-mass relationship in a predominantly old stellar population \[Bower, Lucey & Ellis 1992, Kodama et al. 1998, Bower, Kodama & Terlevich 1998\]. The same relationship can be explored for spiral galaxies: Peletier and de Grijs \[Peletier & de Grijs 1998\] determine a dust-free CMR for edge-on spirals. They find a tight CMR, with a steeper slope than for elliptical and S0 galaxies. Using stellar population synthesis models, they conclude that this is most naturally interpreted as indicating trends in both age and metallicity with magnitude, with faint spiral galaxies having both a younger age and a lower metallicity than brighter spirals.

In the same vein, radially-resolved colour information can be used to investigate radial trends in SFH. In dJiv, radial colour gradients in spiral galaxies were found to be common and were found to be consistent predominantly with the effects of a stellar age gradient. This conclusion is supported by Abraham et al. \[Abraham et al. 1998\], who find that dust and metallicity gradients are ruled out as major contributors to the radial colour trends observed in high redshift spiral galaxies. Also in dJiv, late-type galaxies were found to be on average younger and more metal poor than early-type spirals. As late-type spirals are typically fainter than early-type galaxies (in terms of total luminosity and surface brightness; de Jong 1996b), these results are qualitatively consistent with those of Peletier & de Grijs \[Peletier & de Grijs 1998\]. However, due to the lack of suitable stellar population synthesis models at that time, the trends in global age and metallicity in the sample were difficult to meaningfully quantify and investigate.
In this paper, we extend the analysis presented in dJiv by using the new stellar population synthesis models of Bruzual & Charlot (in preparation) and Kodama & Arimoto (1997), and by augmenting the sample with low-inclination galaxies from the Ursa Major cluster from Tully et al. (1996; TVPHW hereafter) and low surface brightness galaxies from the sample of Bell et al. (1999; hereafter Paper i). A maximum-likelihood method is used to match all the available photometry of the sample galaxies to the colours predicted by these stellar population synthesis models, allowing investigation of trends in galaxy age and metallicity as a function of local and global observables.

The plan of the paper is as follows. In section 2, we review the sample data and the stellar population synthesis and dust models used in this analysis. In section 3, we present the maximum-likelihood fitting method and review the construction of the error estimates. In section 4, we present the results and investigate correlations between our derived ages and metallicities and the galaxy observables. In section 5 we discuss some of the more important correlations and the limitations of the method, compare with literature data, and discuss plausible physical mechanisms for generating some of the correlations in section 4. Finally, we review our main conclusions in section 6. Note that we homogenise the distances of Paper i, dJiv and TVPHW to a value of $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>.

## 2 The data, stellar population and dust models

### 2.1 The data

In order to investigate SFH trends in spiral galaxies, it is important to explore as wide a range of galaxy types, magnitudes, sizes and surface brightnesses as possible. Furthermore, high-quality surface photometry in a number of optical and at least one near-IR passband is required. Accordingly, we include the samples of de Jong & van der Kruit (1994; dJi hereafter), TVPHW and Paper i in this study to cover a wide range of galaxy luminosities, surface brightnesses and sizes. The details of the observations and data reduction can be found in the above references; below, we briefly outline the sample properties and analysis techniques.

The sample of undisturbed spiral galaxies described in dJi was selected from the UGC \[Nilson 1973\] to have red diameters of at least 2 arcmin and axial ratios greater than 0.625. The sample presented in Paper i was taken from a number of sources \[de Blok, van der Hulst & Bothun 1995, de Blok, McGaugh & van der Hulst 1996, O’Neil, Bothun & Cornell 1997a, O’Neil et al. 1997b, Sprayberry et al. 1995, Lauberts & Valentijn 1989, dJi\] to have low estimated blue central surface brightnesses $`\mu _{B,0}\gtrsim 22.5`$ mag arcsec<sup>-2</sup> and diameters at the 25 B mag arcsec<sup>-2</sup> isophote larger than 16 arcsec. TVPHW selected their galaxies from the Ursa Major Cluster (with a velocity dispersion of only 148 km s<sup>-1</sup>, it is the least massive and most spiral-rich nearby cluster): the sample is complete down to at least a $`B`$ band magnitude of $`\sim `$ 14.5 mag, although their sample includes many galaxies fainter than that cutoff. We take galaxies from the three samples with photometric observations (generously defined, with maximum allowed calibration errors of 0.15 mag) in at least two optical and one near-IR passband, axial ratios greater than 0.4, and accurate colours (with errors in a single passband due to sky subtraction less than 0.3 mag) available out to at least 1.5 $`K`$-band disc scale lengths.
For the Ursa Major sample, we applied an additional selection criterion that the $`K^{}`$ band disc scale length must exceed 5 arcsec, to allow the construction of reasonably robust central colours (avoiding the worst of the effects of mismatches in seeing between the different passbands). This selection leaves a sample of 64 galaxies from dJi (omitting UGC 334, as we use the superior data from Paper i for this galaxy: note that we show the two sets of colours for UGC 334 connected with a dotted line in Figs. 3–5 to allow comparison), 23 galaxies from Paper i, and 34 galaxies from TVPHW (omitting NGC 3718 because of its highly peculiar morphology, NGC 3998 because of poor $`R`$ band calibration, and NGC 3896 due to its poor $`K^{}`$ band data). In this way, we have accumulated a sample of 121 low-inclination spiral galaxies with radially-resolved, high-quality optical and near-IR data.

Radially-resolved colours from Paper i and dJiv were used in this study, and surface photometry from TVPHW was used to construct radially-resolved colours. Structural parameters and morphological types were taken from the sample’s source papers and de Jong (1996a; dJii hereafter). In the next two paragraphs we briefly outline the methods used to derive structural parameters and determine radially-resolved galaxy colours in the three source papers.

Galaxy structural parameters were determined using different methods in each of the three source papers. For the sample taken from dJi, a two-dimensional exponential bulge-bar-disc decomposition was used \[dJii\]. Paper i uses a one-dimensional bulge-disc decomposition (with either an exponential or an r<sup>1/4</sup> law bulge). TVPHW use a ‘marking the disc’ fit, where the contribution of the bulge to the disc parameters is minimised by visually assessing where the contribution from the bulge component is negligible. Note that dJii shows that these methods all give comparable results, typically to better than 20 per cent in terms of surface brightness and 10 per cent in terms of scale lengths, with little significant bias. Magnitudes for all galaxies were determined from extrapolation of the radial profiles with an exponential disc. The dominant source of error in the determination of the galaxy parameters is the uncertainty in sky level \[Paper i, dJii\].

In order to examine radial trends in the stellar populations and galaxy dust content, colours were determined in large independent radial bins to minimise the random errors in the surface photometry and small scale stellar population and dust fluctuations. Up to 7 radial bins (depending on the signal-to-noise in the galaxy images) were used: $`0\le r/h_K<0.5`$, $`0.5\le r/h_K<1.5`$, $`1.5\le r/h_K<2.5`$ and so on, where $`r`$ is the major-axis radius, and $`h_K`$ is the major-axis $`K`$-band exponential disc scale length. The ellipticities and position angles of the annuli used to perform the surface photometry were typically determined from the outermost isophotes, and the centroid of the brightest portion of each galaxy was adopted as the ellipse centre. Modulo calibration errors, the dominant source of uncertainty in the surface photometry is the uncertainty in the adopted sky level \[Paper i, dJiv\]. The galaxy photometry and colours were corrected for Galactic foreground extinction using the dust models of Schlegel, Finkbeiner and Davis \[Schlegel, Finkbeiner & Davis 1998\].
Ninety per cent of the Galactic extinction corrections in $`B`$ band are smaller than 0.36 mag, and the largest extinction correction is 1.4 mag.

We correct the $`K`$ band central surface brightness to the face-on orientation assuming that the galaxy is optically thin: $`\mu _0^i=\mu _0^{obs}+2.5\mathrm{log}_{10}(\mathrm{cos}i)`$, where $`\mu _0^i`$ is the inclination corrected surface brightness and $`\mu _0^{obs}`$ is the observed surface brightness. The inclination $`i`$ is derived from the ellipticity of the galaxy $`e`$ assuming an intrinsic axial ratio $`q_0`$ of 0.15 \[Holmberg 1958\]: $`\mathrm{cos}^2i=\{(1-e)^2-q_0^2\}/(1-q_0^2)`$. Assuming that the galaxies are optically thin in $`K`$ should be reasonable, as the extinction in $`K`$ is around ten times lower than the extinction in $`B`$ band. Note that we do not correct the observed galaxy colours for the effects of dust: the uncertainties that arise from this lack of correction are addressed in section 5.4.

The gas fraction of our sample is estimated as follows. The $`K`$ band absolute magnitude is converted into a stellar mass using a constant stellar $`K`$ band mass to light ratio of 0.6 $`M_\odot /L_\odot `$ (c.f. Verheijen 1998; Chapter 6) and a solar $`K`$ band absolute magnitude of 3.41 \[Allen 1973\]. The H i mass is estimated using H i fluxes from the NASA/IPAC Extragalactic Database and the homogenised distances, increasing these masses by a factor of 1.33 to account for the unobserved helium fraction \[de Blok, McGaugh & van der Hulst 1996\]. We estimate molecular gas masses (also corrected for helium) using the ratio of molecular to atomic gas masses as a function of morphological type taken from Young & Knezek \[Young & Knezek 1989\]. The total gas mass is then divided by the sum of the gas and stellar masses to form the gas fraction.

There are two main sources of systematic uncertainty in the gas fractions not reflected in their error bars. Firstly, stellar mass to light ratios carry some uncertainty: they depend sensitively on the SFH and on the effects of dust extinction. Our use of $`K`$ band to estimate the stellar masses to some extent alleviates these uncertainties; however, it must be borne in mind that the stellar masses may be quite uncertain. Secondly, the molecular gas mass is highly uncertain: uncertainty in the CO to H<sub>2</sub> ratio, considerable scatter in the molecular to atomic gas mass ratio within a given morphological type, and the use of a type-dependent correction (instead of e.g. a more physically motivated surface brightness or magnitude dependent correction) all make the molecular gas mass a large uncertainty in constructing the gas fraction. This uncertainty could be better addressed by using only galaxies with known CO fluxes; however, this requirement would severely limit our sample size. As our main aim is to search for trends in age and metallicity (each of which is in itself quite uncertain) as a function of galaxy parameters, a large sample size is more important than a slightly more accurate gas fraction. Note that this gas fraction is only an estimate of the cold gas content of the galaxy: we do not consider any gas in an extended, hot halo in this paper, as such gas is unlikely to directly participate in star formation and can therefore be neglected for our purposes.
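The geometric and gas-fraction bookkeeping described above is compact enough to collect in one place. The following is a minimal sketch (our own illustration, not the authors’ code): $`q_0=0.15`$, the solar $`K`$ band magnitude of 3.41 and the stellar mass to light ratio of 0.6 follow the text, while the molecular-to-atomic ratio of 0.25 is a hypothetical stand-in for the type-dependent Young & Knezek value.

```python
import numpy as np

Q0 = 0.15      # intrinsic axial ratio (Holmberg 1958)
MK_SUN = 3.41  # solar K-band absolute magnitude (Allen 1973)
ML_K = 0.6     # adopted stellar K-band mass-to-light ratio (solar units)

def cos_inclination(ellipticity):
    # cos^2 i = {(1 - e)^2 - q0^2} / (1 - q0^2)
    cos2 = ((1.0 - ellipticity)**2 - Q0**2) / (1.0 - Q0**2)
    return np.sqrt(np.clip(cos2, 0.0, 1.0))

def face_on_mu0(mu0_obs, ellipticity):
    # optically thin face-on correction: mu0_i = mu0_obs + 2.5 log10(cos i)
    return mu0_obs + 2.5 * np.log10(cos_inclination(ellipticity))

def gas_fraction(MK, m_hi, mol_to_atomic=0.25):
    # f_g = M_gas / (M_gas + M_star); the factor 1.33 corrects for helium
    m_star = ML_K * 10.0**(-0.4 * (MK - MK_SUN))     # solar masses
    m_gas = 1.33 * m_hi * (1.0 + mol_to_atomic)
    return m_gas / (m_gas + m_star)

print(face_on_mu0(18.2, 0.3))       # ~17.8 mag arcsec^-2
print(gas_fraction(-23.0, 3.0e9))   # ~0.19
```

For a face-on disc ($`e=0`$, $`\mathrm{cos}i=1`$) the surface brightness correction vanishes, as it should.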
### 2.2 The stellar population models

In order to place constraints on the stellar populations in spiral galaxies, the galaxy colours must be compared with the colours of stellar population synthesis (SPS) models. In this section, we outline the SPS models that we use, and discuss the assumptions that we make in this analysis. We furthermore discuss the uncertainties involved in the use of these assumptions and this colour-based technique.

In order to get some quantitative idea of the inaccuracies introduced in our analysis by uncertainties in SPS models, we used two different SPS models in our analysis: the gissel98 implementation of Bruzual & Charlot (in preparation; hereafter BC98) and Kodama & Arimoto (1997; hereafter KA97). We use the multi-metallicity colour tracks of an evolving single burst stellar population, with a Salpeter \[Salpeter 1955\] stellar initial mass function (IMF) for both models, where the lower mass limit of the IMF was 0.1 M<sub>⊙</sub> for both the BC98 and KA97 models, and the upper mass limit was 125 M<sub>⊙</sub> for BC98 and 60 M<sub>⊙</sub> for the KA97 model. We assume that the IMF does not vary as a function of time.

We use simplified SFHs and fixed metallicities to explore the feasible parameter space in matching the model colours with the galaxy colours. Even though fixing the metallicities ignores chemical evolution, the parameter space that we have used allows the determination of relative galaxy ages and metallicities with a minimal set of assumptions. In this simple case, the integrated spectrum $`F_\lambda (t)`$ for a stellar population with an arbitrary star formation rate (SFR) $`\mathrm{\Psi }(t)`$ is easily obtained from the time-evolving spectrum of a single-burst stellar population with a given metallicity $`f_\lambda (t)`$ using the convolution integral \[Bruzual & Charlot 1993\]:

$$F_\lambda (t)=\int _0^t\mathrm{\Psi }(t-t^{\prime })f_\lambda (t^{\prime })dt^{\prime }.$$ (1)

We use exponential SFHs, parameterised by the star formation timescale $`\tau `$. In this scenario, the SFR $`\mathrm{\Psi }(t)`$ is given by:

$$\mathrm{\Psi }(t)=Be^{-t/\tau },$$ (2)

where $`B`$ is an arbitrary constant determining the total mass of the stellar population. In order to cover the entire range of colours in our sample, both exponentially decreasing and increasing SFHs must be considered. Therefore, $`\tau `$ is an inappropriate choice of parameterisation for the SFR, as in going from old to young stellar populations smoothly, $`\tau `$ progresses from 0 to $`\mathrm{\infty }`$ (for a constant SFR), then from $`-\mathrm{\infty }`$ to small negative values. We parameterise the SFH by the average age of the stellar population $`\langle A\rangle `$, given by:

$$\langle A\rangle =A-\frac{\int _0^At\mathrm{\Psi }(t)dt}{\int _0^A\mathrm{\Psi }(t)dt}=A-\tau \frac{1-e^{-A/\tau }(1+A/\tau )}{1-e^{-A/\tau }}$$ (3)

for an exponential SFH, as given in Equation 2, where $`A`$ is the age of the oldest stars in the stellar population (signifying when star formation started). In our case, we take $`A=12`$ Gyr as the age of all galaxies, and parameterise the difference between stellar populations purely in terms of $`\langle A\rangle `$.

Clearly, the above assumptions that we use to construct a grid of model colours are rather simplistic. In particular, our assumption of a galaxy age of 12 Gyr and an exponential SFH, while allowing almost complete coverage of the colours observed in our sample, is unlikely to accurately reflect the broad range of SFHs in our diverse sample of galaxies.
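Before turning to these caveats in detail, the mapping between the star formation timescale $`\tau `$ and the average age $`\langle A\rangle `$ of equation (3) can be evaluated explicitly. A minimal sketch (our own illustration, with $`A=12`$ Gyr as in the text; the numerical inversion covers decaying SFHs only):

```python
import numpy as np
from scipy.optimize import brentq

A_OLD = 12.0   # age of the oldest stars, in Gyr (as adopted in the text)

def mean_age(tau):
    # eq. (3): <A> = A - tau * [1 - e^{-A/tau}(1 + A/tau)] / [1 - e^{-A/tau}]
    x = A_OLD / tau
    return A_OLD - tau * (1.0 - np.exp(-x) * (1.0 + x)) / (1.0 - np.exp(-x))

def tau_from_mean_age(target):
    # invert <A>(tau) numerically (positive, i.e. decaying, tau only)
    return brentq(lambda t: mean_age(t) - target, 0.01, 1.0e3)

for tau in (1.0, 5.0, -5.0):          # negative tau = increasing SFR
    print(tau, mean_age(tau))
print(tau_from_mean_age(8.0))
```

For $`\tau \to \mathrm{\infty }`$ (a constant SFR) this reproduces $`\langle A\rangle \to A/2=6`$ Gyr, while negative $`\tau `$ (an increasing SFR) gives correspondingly younger average ages.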
Some galaxies may be older or younger than 12 Gyr, and star formation is very likely to proceed in bursts, instead of varying smoothly as we assume above. Additional uncertainties stem from the use of a constant stellar metallicity and from the uncertainties inherent to the SPS models themselves, which will be at least $`\sim `$ 0.05 mag for the optical passbands, increasing to $`\sim `$ 0.1 mag for the near-IR passbands \[Charlot, Worthey & Bressan 1996\]. However, the important point here is that the above method gives robust relative ages and metallicities. Essentially, this colour-based method gives some kind of constraint on the luminosity-weighted ratio of $`\lesssim `$ 2 Gyr old stellar populations to $`\gtrsim `$ 5 Gyr old stellar populations, which we parameterise using the average age $`\langle A\rangle `$. Therefore, the details of how we construct the older and younger stellar populations are reasonably unimportant. It is perfectly possible to assume an exponential SFH with an age of only 5 Gyr: if this is done, galaxies become younger (of course), but the relative ordering of galaxies by e.g. age is unaffected. Note that because the colours are luminosity weighted, small bursts of star formation may affect the relative ages and metallicities of galaxies (relative to their underlying ages and metallicities before the burst); however, it is unlikely that a large fraction of galaxies will be strongly affected by large amounts of recent star formation. The basic message is that absolute galaxy ages and metallicities are reasonably uncertain, but the relative trends are robust. Note that all of the results presented in this paper were obtained using the BC98 models, unless otherwise stated.

### 2.3 The dust models

We do not use dust models in the SFH fitting; however, we use dust models in Figs. 2–6 and section 5.4 to allow us to quantify the effects that dust reddening may have on our results. We adopt the Milky Way (MW) and Small Magellanic Cloud (SMC) extinction curves and albedos from Gordon, Calzetti & Witt \[Gordon et al. 1997\]. In the top two panels in Figs. 2–6, we show a MW foreground screen model with a total $`V`$ band extinction $`A_V=0.3`$ mag (primarily to facilitate comparison with other studies). In the lower two panels, we show the reddening vector from a more realistic face-on Triplex dust model \[Evans 1994, Disney, Davies & Phillips 1989; DDP hereafter\]. In this model, the dust and stars are distributed smoothly in a vertically and horizontally exponential profile, with equal vertical and horizontal distributions. The two models shown have a reasonable central $`V`$ band optical depth in extinction, from pole to pole, of two. This value for the central optical depth in $`V`$ band is supported by a number of recent statistical studies into the global properties of dust in galaxies \[Peletier & Willner 1992, Huizinga 1994, Tully & Verheijen 1997, Tully et al. 1998, Kuchinski et al. 1998\]. However, this model does not take account of scattering. Monte Carlo simulations of Triplex-style galaxy models taking into account the effects of scattering indicate that, because our sample is predominantly face-on, at least as many photons will be scattered into the line of sight as are scattered out of the line of sight \[dJiv\]. Therefore, we use the dust absorption curve to approximate the effects of more realistic distributions of dust on the colours of a galaxy. The use of the absorption curve is the main reason for the large difference in the MW and SMC Triplex model vectors: Gordon et al.’s \[Gordon et al.
1997\] MW dust albedo is much higher than their SMC dust albedo, leading to less absorption per unit pole to pole extinction. Note also that the Triplex dust reddening vector produces much more optical reddening than e.g. those in dJiv: this is due to our use of more recent near-IR dust albedos, which are much larger than e.g. those of Bruzual, Magris & Calvet \[Bruzual, Magris & Calvet 1988\].

## 3 Maximum-likelihood fitting

Before we describe the maximum-likelihood fitting method in detail, it is important to understand the nature of the error budget in an individual galaxy colour profile. There are three main contributions to the galaxy photometric errors in each passband (note that because of the use of large radial bins, shot noise errors are negligible compared to these three sources of error).

1. Zero-point calibration errors $`\delta \mu _{cal}`$ affect the whole galaxy colour profile systematically by applying a constant offset to the colour profiles.

2. In the optical passbands, flat fielding errors $`\delta \mu _{ff}`$ affect the galaxy colour profile randomly as a function of radius. Because of the use of large radial bins in the construction of the colours, the dominant contribution to the flat fielding uncertainty is the flat field structure over the largest scales. Because the flat fielding structure is primarily large scale, it is fair to reduce the flat fielding error by a factor of $`\sim `$ 10, as one radial bin will typically only cover one tenth or less of the whole frame. We assume normally-distributed flat fielding errors with a $`\sigma `$ of 0.5/10 per cent of the galaxy and sky level in the optical (0.5 per cent was chosen as a representative, if not slightly generous, flat field error over large scales). In the near-IR, sky subtraction to a large extent cancels out any flat fielding uncertainties: we, therefore, do not include the effect of flat fielding uncertainties in the error budget for near-IR passbands.

3. Sky level estimation errors $`\delta \mu _{sky}`$ affect the shape of the colour profile in a systematic way. If the assumed sky level is too high, the galaxy profile ‘cuts off’ too early, making the galaxy appear smaller and fainter than it is, whereas if the sky level is too low the galaxy profile ‘flattens off’ and the galaxy appears larger and brighter than it actually is. As long as the final estimated errors in a given annulus due to sky variations are reasonably small ($`\lesssim 0.3`$ mag), this error $`\delta \mu _{sky}`$ is related to the sky level error $`\delta \mu _s`$ (in magnitudes) by:

$$\delta \mu _{sky}\approx \delta \mu _s10^{0.4(\mu -\mu _s)},$$ (4)

where $`\mu `$ is the average surface brightness of the galaxy in that annulus and $`\mu _s`$ is the sky surface brightness.

These photometric errors have different contributions to the total error budget at different radii, and have different effects on the ages, metallicities, and their gradients. If a single annulus is treated on its own, the total error estimate for each passband $`\delta \mu _{tot}`$ is given by:

$$\delta \mu _{tot}=(\delta \mu _{cal}^2+\delta \mu _{ff}^2+\delta \mu _{sky}^2)^{1/2}.$$ (5)

We use magnitude errors instead of flux errors in constructing these error estimates: this simplifies the analysis greatly, and is more than accurate enough for our purposes. We fit the SPS models to each set of radially-resolved galaxy colours using a maximum-likelihood approach, as detailed below. Note that we do not include the effects of dust in the maximum-likelihood fit.
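The error budget of equations (4) and (5) can be made concrete before the fitting steps are listed. A minimal sketch with representative, made-up numbers (a 0.03 mag calibration error, 0.005 mag flat-field and sky-level terms, and an optical sky of 22.7 mag arcsec<sup>-2</sup>):

```python
import numpy as np

def delta_mu_sky(mu, mu_sky, delta_mu_s):
    # eq. (4): a sky-level error delta_mu_s is amplified as the annulus
    # surface brightness mu drops towards (and below) the sky level
    return delta_mu_s * 10.0**(0.4 * (mu - mu_sky))

def delta_mu_total(mu, mu_sky, d_cal=0.03, d_ff=0.005, d_sky=0.005):
    # eq. (5): quadrature sum of calibration, flat-field and sky terms
    # (all in mag; the default values are illustrative, not the paper's)
    return np.sqrt(d_cal**2 + d_ff**2 + delta_mu_sky(mu, mu_sky, d_sky)**2)

# the total error balloons in the faint outer annuli
for mu in (20.0, 24.0, 26.0):
    print(mu, round(float(delta_mu_total(mu, mu_sky=22.7)), 3))
```

The sky term dominates once an annulus falls a few magnitudes below the sky, which is why the errors on the outer-bin ages and metallicities grow so rapidly with radius.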
* We generate a finely-spaced model grid by calculating the SPS models for a fine grid of $`\tau `$ values. These finely spaced $`\tau `$ grids are then interpolated in metallicity using a cubic spline between the model metallicities. These models then provide a set of arbitrary normalisation model magnitudes $`\mu _{\mathrm{model},i}(\langle A\rangle ,Z)`$ for each passband $`i`$ for a range of SFHs and metallicities.

* We read in the multi-colour galaxy surface brightnesses as a function of radius, treating each annulus independently (each annulus has surface brightness $`\mu _{\mathrm{obs},i}`$). The total error estimate $`\delta \mu _{tot,i}`$ for each annulus is determined for each passband $`i`$.

* For each $`\langle A\rangle `$ and metallicity $`Z`$, it is then possible to determine the best normalisation $`\mu _c`$ between the model and annulus colours using:

$$\mu _c=\frac{\sum _{i=1}^n\{\mu _{\mathrm{obs},i}-\mu _{\mathrm{model},i}(\langle A\rangle ,Z)\}/\delta \mu _{tot,i}^2}{\sum _{i=1}^n1/\delta \mu _{tot,i}^2},$$ (6)

where $`n`$ is the number of passbands, and its corresponding $`\chi ^2`$ is given by:

$$\chi ^2=\frac{1}{n-1}\underset{i=1}{\overset{n}{}}\frac{(\mu _{\mathrm{obs},i}-\mu _{\mathrm{model},i}(\langle A\rangle ,Z)-\mu _c)^2}{\delta \mu _{tot,i}^2}.$$ (7)

The best model $`(\langle A\rangle ,Z)`$ match is the one with the minimum $`\chi ^2`$ value. This procedure is then carried out for all of the galaxy annuli. Note that minimising $`\chi ^2`$, strictly speaking, requires the errors to be Gaussian; however, using magnitude errors $`\lesssim `$ 0.3 mag does not lead to significant errors in $`\chi ^2`$.

### 3.1 Estimating age and metallicity gradients

In order to gain insight into age and metallicity trends as a function of radius, we perform a weighted linear fit to the ages and metallicities of each galaxy. This fit is parameterised by a gradient per disc scale length and an intercept at the disc half light radius. Each point entering the fit is weighted by the total age or metallicity error in each annulus, determined using $`\mathrm{\Delta }\chi ^2=1`$ intervals for the age and metallicity parameters individually (i.e. the age or metallicity range for which the reduced $`\chi ^2`$ is smaller than $`\chi _{\mathrm{best}\mathrm{fit}}^2+1`$). The first two, three or four bins of the radially-resolved ages and metallicities are used in the fit. Galaxies with only central colours are omitted from the sample, and galaxies with more than four bins are fit only out to the fourth bin. Including the fifth, sixth and seventh bins in the gradient fits does not significantly affect our results, but can increase the noise in the gradient determinations.

As two out of the three sources of photometric error are correlated over the galaxy, it is impossible to use the $`\mathrm{\Delta }\chi ^2=1`$ contours for each annulus to estimate e.g. the errors in age and metallicity gradients. This is because calibration uncertainties will, to a large extent, not affect the age or metallicity gradient, whereas the errors for the age or metallicity intercept will depend sensitively on the size of the calibration error. We, therefore, produce error estimates using a Monte Carlo approach. For each galaxy, we repeat the above SFH derivation and analysis 100 times, applying normally distributed random calibration, flat fielding and sky level errors. This approach has the virtue that the different types of photometric error act on the galaxy profile as they should, allowing valid assessment of the accuracy of any trends in age or metallicity with radius.
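A minimal sketch of the per-annulus grid search just described (our own illustration: a toy two-parameter colour model with invented coefficients stands in for the interpolated SPS grid):

```python
import numpy as np

def model_mags(A, Z):
    # toy stand-in for the SPS grid: optical colours mostly age-sensitive,
    # the optical--near-IR colour mostly Z-sensitive (coefficients invented)
    c_opt = 0.06 * A + 0.25 * np.log10(Z)
    c_ir = 0.02 * A + 0.45 * np.log10(Z)
    return np.array([0.0, c_opt, 2.0 * c_opt, 2.0 * c_opt + c_ir])

def fit_annulus(mu_obs, d_mu, ages, zs):
    # grid search over (<A>, Z), implementing eqs (6) and (7)
    w = 1.0 / d_mu**2
    best = None
    for A in ages:
        for Z in zs:
            mu_model = model_mags(A, Z)
            mu_c = np.sum((mu_obs - mu_model) * w) / np.sum(w)               # eq. (6)
            chi2 = np.sum(w * (mu_obs - mu_model - mu_c)**2) / (w.size - 1)  # eq. (7)
            if best is None or chi2 < best[0]:
                best = (chi2, A, Z)
    return best

rng = np.random.default_rng(1)
obs = model_mags(7.0, 1.0) + 21.3 + rng.normal(0.0, 0.02, 4)  # arbitrary zero point
print(fit_annulus(obs, np.full(4, 0.02),
                  np.linspace(2.0, 12.0, 41), np.linspace(0.1, 2.5, 49)))
```

Because equation (6) profiles out the overall normalisation $`\mu _c`$ analytically, only the colours — not the absolute surface brightnesses — constrain $`(\langle A\rangle ,Z)`$, exactly as intended.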
An example of the ages and metallicities derived with this approach for the galaxy UGC 3080 is presented in Fig. 1. The solid line indicates the best-fit model for the galaxy colours as a function of radius, and the dotted lines the Monte Carlo simulations with realistic (or perhaps slightly conservative) photometric errors. Note that the errors in age and metallicity increase substantially with radius. This is primarily due to the sky subtraction errors, as their effect increases as the surface brightness becomes fainter with radius. Note that because we use a weighted fit, the age and metallicity gradients are robust to the largest of the sky subtraction errors in the outer parts.

The error in any one parameter, e.g. metallicity, in any one annulus is given by half of the interval containing 68 per cent of the e.g. metallicities in that annulus (the one sigma confidence interval). Errors in the age or metallicity gradients and intercepts are determined by fitting the 100 Monte Carlo simulated datasets in the same way as the real data. As above, the errors in each of these fit parameters are determined by working out the one sigma confidence interval of the simulations for each fit parameter alone.

## 4 Results

### 4.1 Colour-colour plots

In order to better appreciate the characteristics of the data, the model uncertainties and the potential effects of dust reddening on the results, it is useful to consider some colour-colour plots in some detail. In Figs. 2 through 6 we show $`B-R`$ against $`R-K`$ colours as a function of radius for our sample galaxies. Central colours are denoted by solid circles (for the sample from dJiv), open circles (for the sample from Paper i) or stars (for the sample from TVPHW). Overplotted in the upper right panel is the SPS model of KA97; in the remaining panels we plot BC98’s model. Note that the lower right panel is labelled with the average age of the stellar population $`\langle A\rangle `$; the remaining panels are labelled with the star formation timescale $`\tau `$. In the upper panels we show a dust reddening vector assuming that dust is distributed in a foreground screen, and in the lower panels we show two absorption reddening vectors from the analytical treatment of DDP and Evans \[Evans 1994\].

These figures clearly reiterate the conclusions of Paper i and dJiv, where it was found that colour gradients are common in all types of spiral discs. Furthermore, these colour gradients are, for the most part, consistent with gradients in mean stellar age. The effects of dust reddening may also play a rôle; however, dust is unlikely to produce such large colour gradients: see section 5.4 and e.g. Kuchinski et al. \[Kuchinski et al. 1998\] for a more detailed justification.

A notable exception to this trend is provided by the relatively faint S0 galaxies from the sample of TVPHW (see Fig. 2). These galaxies show a strong metallicity gradient and an inverse age gradient, in that the outer regions of the galaxy appear older and more metal poor than the younger and more metal rich central regions. This conclusion is consistent with the results from studies of line strength gradients: Kuntschner \[Kuntschner 1998\] finds that many of the fainter S0s in his sample have a relatively young and metal rich nucleus, with older and more metal poor outer regions. Also, in Fig. 2, we find that there is a type dependence in the galaxy age and metallicity, in the sense that later type galaxies are both younger and more metal poor than their earlier type counterparts.
This is partially consistent with dJiv, who finds that later type spirals have predominantly lower stellar metallicity (but finds no significant trends in mean age with galaxy type).

In Figs. 3 to 6 we explore the differences in SFH as a function of the physical parameters of our sample galaxies. Figure 5 suggests there is little correlation between galaxy scale length and SFH: there may be a weak correlation in the sense that there are very few young and metal poor large scale length galaxies. In contrast, when taken together, Figs. 3, 4 and 6 suggest that there are real correlations between the SFHs of spiral galaxies (as probed by the ages and metallicities) and the magnitudes, central surface brightnesses and gas fractions of galaxies, in the sense that brighter, higher surface brightness galaxies with lower gas fractions tend to be older and more metal rich than fainter, lower surface brightness galaxies with larger gas fractions. Later, we will see these trends more clearly in terms of average ages and metallicities, but it is useful to bear in mind these colour-colour plots when considering the trends presented in the remainder of this section.

### 4.2 Local ages and metallicities

We investigate the relation between the average ages and metallicities inferred from the colours of each galaxy annulus and the average $`K`$ band surface brightness in that annulus in Fig. 7. Representative error bars from the Monte Carlo simulations are shown in the lower left hand corner of the plots. Strong, statistically significant correlations between local age and the $`K`$ band surface brightness, and between local metallicity and the $`K`$ band surface brightness, are found. While the scatter is considerable, the existence of such a strong relationship between local $`K`$ band surface brightness and the age and metallicity in that region is remarkable.

In order to probe the dependence of the SFH on the structural parameters of our sample, we have carried out unweighted least-squares fits of these and subsequent correlations: the results of these fits are outlined in Table 1. In Table 1 we give the coefficients of the unweighted least-squares fits to the correlations significant at the 99 per cent level (where the significances are determined from a Spearman rank order test), along with errors derived from bootstrap resampling of the data \[Press et al. 1986\]. We will address these correlations further in the next section and in the discussion (section 5), where we attempt to relate these strong local correlations to the global correlations.

### 4.3 Global relations

In addition to the individual annulus age and metallicity estimates, we calculated best-fit age and metallicity gradients and intercepts (as described in section 3.1). These fit parameters are useful in that they allow us to probe the global relationships between e.g. magnitude and SFH. These fits to an individual galaxy are parameterised by their gradient (in terms of the inverse $`K`$ band scale length) and their intercept (expressed as the intercept at the disc half light radius $`R_{eff}`$). We explore correlations between the structural parameters and the radial age and metallicity fit parameters in Figs. 8–11.

#### 4.3.1 Global ages

In Fig. 8, we see how the age intercept at the half light radius relates to the $`K`$ band surface brightness, $`K`$ band absolute magnitude, $`K`$ band disc scale length and gas fraction.
As expected from the strong correlation between local surface brightness and age, there is a highly significant correlation between the $`K`$ band central surface brightness and age. We see also that there are highly significant correlations between the $`K`$ band absolute magnitude and age, and between the gas fraction and age. There is no significant correlation between age and $`K`$ band disc scale length.

An important question to ask at this stage is: are these correlations related to each other in some way? We investigate this in Fig. 12, where we plot some of the physical parameters against each other. We see that the $`K`$ band absolute magnitude of this sample is correlated (with large scatter) with the $`K`$ band central surface brightness. We also see in Fig. 12 that the brightest galaxies all have the largest scale lengths. In addition, both magnitude and surface brightness correlate strongly with the gas fraction. From these correlations, we can see that all three correlations between age and central surface brightness, magnitude and gas fraction may have a common origin: the correlation between these three physical parameters means that it is difficult, using Fig. 8 alone, to determine which parameters age is sensitive to.

We do feel, however, that the correlations between age and surface brightness and between age and magnitude do not stem from the correlation between age and gas fraction, for two reasons. Firstly, the scatter in the age–gas fraction relationship is comparable to or even larger than that in the age–surface brightness and age–magnitude correlations. Secondly, the gas fraction is an indication of the total amount of star formation over the history of a galaxy (assuming that there is little late gas infall), implying that the gas fraction is quite dependent on the SFH. Therefore, correlations between SFH and gas fraction are likely to be driven by this effect, rather than indicating that gas fraction drives SFH. To summarise Figs. 8 and 12, the age of a galaxy correlates strongly with the $`K`$ band surface brightness, the $`K`$ band absolute magnitude and the gas fraction. We discuss these important correlations further in section 5.1.

#### 4.3.2 Global metallicities

In Fig. 9, we see how the metallicity intercept at the disc half light radius relates to the $`K`$ band surface brightness, $`K`$ band absolute magnitude, $`K`$ band disc scale length and gas fraction. The trends shown in Fig. 9 are similar to those seen in Fig. 8: this demonstrates the close correlation between the age and metallicity of a galaxy, in the sense that older galaxies are typically much more metal rich than their younger brethren. However, there are some important differences between the two sets of correlations. Firstly, although the correlation is not statistically significant, there is a conspicuous lack of galaxies with large scale lengths and low metallicities. This lack of large, metal poor galaxies can be understood from the relationship between magnitude and scale length in Fig. 12: large galaxies, because of their size, are bright by default (even if they have low surface brightnesses; Paper i) and have near-solar metallicities. Secondly, there seems to be a kind of ‘saturation’ in the stellar metallicities that was not apparent for the mean ages.
It is particularly apparent in the correlation between the metallicity and $`K`$ band absolute magnitude: the metallicity of galaxies with an absolute $`K`$ band magnitude of $`-22`$ is very similar to the metallicity of galaxies that are nearly 40 times brighter, with an absolute $`K`$ band magnitude of $`-26`$. The metallicity of galaxies fainter than an absolute $`K`$ band magnitude of $`-22`$ can be much lower, to the point where the metallicities are too low to be included reliably on the stellar population model grid. This ‘saturation’ is easily understood in terms of the gas fractions: the near-solar metallicity galaxies tend to have gas fractions $`\lesssim `$ 0.5. In this case, assuming a closed box model of chemical evolution (i.e. no gas flows into or out of the galaxy, and the metals in the galaxy are well-mixed), the stellar metallicity will be within 0.5 dex of the yield (see Pagel 1998 and references therein), which in this case would indicate a yield around, or just above, solar metallicity. This case is shown in the lower right panel of Fig. 9 (also Fig. 16), where we show the stellar metallicity of a solar metallicity yield closed box model against the gas fraction. Note that the gas metallicity in the closed box model continues to rise to very low gas fractions: this will become important later when we compare the stellar metallicity–luminosity relation from this work with the gas metallicity–luminosity relation (section 5.5).

#### 4.3.3 Age gradients

In Fig. 10, we see how the age gradient (per $`K`$ band disc scale length) relates to the $`K`$ band surface brightness, $`K`$ band absolute magnitude, $`K`$ band disc scale length and gas fraction. An important first result is that, on average, we have a strong detection of an overall age gradient: the average age gradient per $`K`$ band disc scale length is $`-0.79\pm 0.08`$ Gyr $`h^{-1}`$ (10$`\sigma `$; where the quoted error is the error in the mean gradient). We see that the age gradient does not correlate with the $`K`$ band absolute magnitude. However, there are statistically significant correlations between the age gradient and the $`K`$ band central surface brightness, $`K`$ band disc scale length and gas fraction. Smaller galaxies, higher surface brightness galaxies and low gas fraction galaxies all tend to have relatively flat age gradients (with much scatter that almost exceeds the amplitude of the trend).

One possibility is that the S0s with ‘inverse’ age gradients (in the sense that their central regions are younger than their outer regions) are producing these trends, as the S0s in this sample are relatively faint (therefore not producing much of a trend in age gradient with magnitude) but have high surface brightnesses, small sizes and low gas fractions (therefore producing trends in all of the other physical parameters). We investigate this possibility in Fig. 13. Here we see that the trend is not simply due to ‘contamination’ from a small population of S0s: S0s contribute to a more general trend of decreasing age gradients for earlier galaxy types. Therefore, this decrease in age gradient is a real effect. We address a possible cause for this phenomenon later in section 5.6.

#### 4.3.4 Metallicity gradients

In Fig. 11, we see how the metallicity gradient (per $`K`$ band disc scale length) relates to the $`K`$ band surface brightness, $`K`$ band absolute magnitude, $`K`$ band disc scale length and gas fraction.
The metallicity gradient is fairly poorly determined because of its typically smaller amplitude and its sensitivity to the rather noisier near-IR colours. Also, because of its smaller amplitude, metallicity gradients are more susceptible to the effects of dust reddening than the rather larger age gradients. On average (assuming that the effects of dust are negligible), we have detected an overall metallicity gradient (although with much scatter): the average metallicity gradient per $`K`$ band disc scale length is $`-0.14\pm 0.02`$ dex $`h^{-1}`$ (7$`\sigma `$; where again the quoted error is the error in the mean gradient). Due to the large observational scatter, however, we have failed to detect any trends in metallicity gradient with galaxy properties.

## 5 Discussion

### 5.1 Surface brightness vs. magnitude

In the previous section, we presented a wide range of correlations between diagnostics of the SFH and the physical parameters that describe a spiral galaxy. Our main finding was that the SFH of a galaxy correlates well with both the $`K`$ band absolute magnitude and the $`K`$ band surface brightness. However, this leaves us in an uncomfortable position: because of the surface brightness–magnitude correlation in our dataset (Fig. 12), it is impossible to tell from Figs. 8, 9 and 12 alone which parameter is more important in determining the SFH of a galaxy.

One straightforward way to investigate which parameter is more important is to check whether the offset of a galaxy from e.g. the age–magnitude correlation of Fig. 8 correlates with surface brightness. In the upper panels of Fig. 14, we show the residuals from the age–surface brightness correlation and the metallicity–surface brightness correlation against magnitude. The age residual does not significantly correlate with magnitude (the correlation is significant at only the 97 per cent level and is not shown). The metallicity residuals are significantly correlated with magnitude, in the sense that brighter galaxies tend to be more metal rich than expected from their surface brightness alone. In the lower panels of Fig. 14, we show the residuals from the age–magnitude correlation and the metallicity–magnitude correlation. In contrast to the upper panels of this figure, there are highly significant correlations between the age residual and surface brightness, and between the metallicity residual and surface brightness.

Another way to consider the above is to investigate the distribution of galaxies in the three-dimensional age–magnitude–surface brightness and metallicity–magnitude–surface brightness spaces. Unweighted least-squares fits of the age and metallicity as a function of surface brightness and magnitude yield the surfaces:

$$A_{eff}=6.24(\pm 0.14)-0.71(\pm 0.06)(\mu _{K,0}-20)-0.15(\pm 0.03)(M_K+20)$$

$$\mathrm{log}_{10}(Z_{eff}/Z_\odot )=-0.99(\pm 0.05)-0.17(\pm 0.01)(\mu _{K,0}-20)-0.10(\pm 0.01)(M_K+20),$$

where the quoted errors (in brackets) are derived using bootstrap resampling of the data and the intercept is defined at $`\mu _{K,0}=20`$ mag arcsec<sup>-2</sup> and $`M_K=-20`$. A best fit line to the dataset is shown in Figs. 8 and 9. This best fit line is defined by the intersection of the best-fit plane with the least-squares bisector fit (where each variable is treated equally in the fit; Isobe et al. 1990) of the magnitude–surface brightness correlation (Fig. 12): $`M_K=-19.75+1.88(\mu _{K,0}-20)`$.
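A sketch of how such a bivariate plane fit, with bootstrap coefficient errors of the kind quoted throughout, can be set up (our own reconstruction; the synthetic sample is generated from the quoted best-fit plane and bisector relation purely to exercise the code):

```python
import numpy as np

def fit_plane(age, mu0, MK):
    # least squares: age = a + b*(mu0 - 20) + c*(MK + 20)
    X = np.column_stack([np.ones_like(mu0), mu0 - 20.0, MK + 20.0])
    coeff, *_ = np.linalg.lstsq(X, age, rcond=None)
    return coeff

def bootstrap_errors(age, mu0, MK, n_boot=1000, seed=2):
    # refit randomly re-drawn samples; coefficient scatter = error estimate
    rng = np.random.default_rng(seed)
    n = age.size
    draws = [fit_plane(age[i], mu0[i], MK[i])
             for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.std(draws, axis=0)

rng = np.random.default_rng(0)
mu0 = rng.uniform(16.0, 22.0, 121)
MK = -19.75 + 1.88 * (mu0 - 20.0) + rng.normal(0.0, 1.0, 121)
age = 6.24 - 0.71 * (mu0 - 20.0) - 0.15 * (MK + 20.0) + rng.normal(0.0, 1.0, 121)

print(fit_plane(age, mu0, MK))         # ~ [6.24, -0.71, -0.15]
print(bootstrap_errors(age, mu0, MK))  # bootstrap errors on the coefficients
```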
No plots of the distribution of galaxies in these three-dimensional spaces are shown: the two-dimensional projections of this space in Figs. 8, 9 and 12 are among the best projections of this three-dimensional space. From the above fits, we can see quite clearly that age is primarily sensitive to central surface brightness: the change in average age per magnitude change in central surface brightness is much larger than the change in age per magnitude change in luminosity. The stellar metallicity of a galaxy is sensitive to both surface brightness and magnitude: this is clear from the fairly comparable change in metallicity per magnitude change in surface brightness or luminosity.

We have also analysed the residuals from the age–local $`K`$ band surface brightness and metallicity–local $`K`$ band surface brightness correlations as a function of central $`K`$ band surface brightness and $`K`$ band absolute magnitude (Fig. 7). The age of an annulus in a galaxy primarily correlates with its local $`K`$ band surface brightness and correlates to a lesser extent with the galaxy’s $`K`$ band central surface brightness and magnitude. The metallicity of a galaxy annulus correlates equally well with both the local $`K`$ band surface brightness and the total galaxy magnitude, and only very weakly with the central $`K`$ band surface brightness. Thus, the results from the analysis of local ages and metallicities are consistent with the analysis of the global correlations: much (but not all) of the variation that was attributed to the central surface brightness in the global analysis is driven by the local surface brightness in the local analysis. However, note that it is impossible to tell whether it is the global correlations that drive the local correlations, or vice versa. The local $`K`$ band surface brightnesses (obviously) correlate with the central $`K`$ band surface brightnesses, meaning that it is practically impossible to disentangle the effects of local and global surface brightnesses.

However, there must be other factors at play in determining the SFH of spiral galaxies: the local and global age and metallicity residuals (once all the above trends have been removed) show a scatter around 1.4 times larger than the (typically quite generous) observational errors, indicating cosmic scatter equivalent to $`\sim `$ 1 Gyr in age and $`\sim `$ 0.2 dex in metallicity. This scatter is intrinsic to the galaxies themselves and must be explained by any model that hopes to accurately describe the driving forces behind the SFHs of spiral galaxies. However, note that some or all of this scatter may be due to the influence of small bursts of star formation on the colours of galaxies.

To summarise, $`K`$ band surface brightness (in either its local or global form) is the most important ‘predictor’ of the SFH of a galaxy: the effects of $`K`$ band absolute magnitude modulate the overall trend defined by surface brightness. Furthermore, it is apparent that the metallicity of a galaxy depends more sensitively on its $`K`$ band magnitude than does the age: this point is discussed later in section 5.6. On top of these overall trends, there is an intrinsic scatter, indicating that surface brightness and magnitude cannot be the only parameters describing the SFH of a galaxy.

### 5.2 Star formation histories in terms of masses and densities

In the above, we have seen how age and metallicity vary as functions of $`K`$ band surface brightness and $`K`$ band absolute magnitude.
$`K`$ band stellar mass to light ratios are expected to be relatively insensitive to differences in stellar populations (compared to optical passbands; e.g. dJiv). Therefore, the variation of SFH with $`K`$ band surface brightness and absolute magnitude is likely to approximately represent variations in SFH with stellar surface density and stellar mass (especially over the dynamic range in surface brightness and magnitude considered in this paper). However, these trends with $`K`$ band surface brightness and $`K`$ band absolute magnitude may not accurately represent the variation of SFH with total (baryonic) surface density or mass: in order to take that into account, we must somehow account for the gas fraction of the galaxy. Accordingly, we have constructed ‘modified’ $`K`$ band surface brightnesses and magnitudes by adding $`2.5\mathrm{log}_{10}(1-f_g)`$. This correction, in essence, converts all of the gas (both the measured atomic component, and the much more uncertain molecular component) into stars with a constant stellar mass to light ratio in $`K`$ band of 0.6 $`M_\odot /L_\odot `$.

In this correction, we make two main assumptions. Firstly, we assume a constant $`K`$ band stellar mass to light ratio of 0.6 $`M_\odot /L_\odot `$. This might not be such an inaccurate assumption: the $`K`$ band mass to light ratio is expected to be relatively robust to the presence of young stellar populations; however, the assumption of constancy is still crude and should be treated with caution. Note, however, that the relative trends in Fig. 15 are quite robust to changes in the stellar mass to light ratio: as the stellar mass to light ratio increases, the modified magnitudes and surface brightnesses creep asymptotically closer to their unmodified values. Secondly, the correction to the surface brightness implicitly assumes that the gas will turn into stars with the same spatial distribution as the present-day stellar component. This is quite a poor assumption, as the gas distribution is usually much more extended than the stellar distribution: this implies that the corrections to the central surface brightnesses are likely to be overestimates. However, our purpose here is not to construct accurate measures of the baryonic masses and densities of disc galaxies; our purpose is merely to construct some kind of representative mass and density which will allow us to determine whether the trends in SFH with $`K`$ band magnitude and surface brightness reflect an underlying, more physically meaningful trend in SFH with mass and density.

In Fig. 15 we show the trends in age at the disc half light radius (left hand panels) and metallicity at the disc half light radius (right hand panels) with modified $`K`$ band central surface brightness (upper panels) and modified $`K`$ band magnitude (lower panels). It is clear that the trends in SFH with $`K`$ band surface brightness and absolute magnitude presented in Figs. 8 and 9 represent an underlying trend in SFH with the total baryonic galaxy densities and masses. The best fit slopes are typically slightly steeper than those for the unmodified magnitudes and surface brightnesses (Table 1): because low surface brightness and/or faint galaxies are gas-rich, correcting them for the contributions from their gas fractions tends to steepen the correlations somewhat.
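The gas-fraction ‘modification’ just described amounts to a single line of arithmetic; a minimal sketch with illustrative inputs:

```python
import numpy as np

def gas_modified(mag_or_mu, f_gas):
    # 'modified' magnitude or surface brightness: turn the gas into stars
    # at fixed K-band M/L by adding 2.5 log10(1 - f_g); the output is
    # brighter for gas-rich systems and unchanged as f_g -> 0
    return mag_or_mu + 2.5 * np.log10(1.0 - np.asarray(f_gas))

print(gas_modified(-20.0, 0.6))   # gas-rich dwarf: ~1 mag brighter
print(gas_modified(-24.0, 0.05))  # gas-poor bright spiral: nearly unchanged
```

The correction is large only for gas-rich systems, which is why it mainly moves the faint, low surface brightness end of the relations and thereby steepens the correlations.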
We have also attempted to disentangle the surface density and mass dependencies of the SFH using the method described above in section 5.1: again, we find that surface density is the dominant parameter in determining the SFH of a galaxy, and that the mass of a galaxy has a secondary, modulating effect on the SFH.

### 5.3 Does the choice of IMF and SPS model affect our conclusions?

We have derived the above conclusions using a Salpeter \[Salpeter 1955\] IMF and the BC98 models, but are our conclusions affected by SPS model details, such as the IMF or the choice of model? We address this possible source of systematic uncertainty in Table 2, where we compare the results for the correlation between the $`K`$ band central surface brightness $`\mu _{K,0}`$ and the age intercept at the disc half light radius $`A_{eff}`$ using the models of BC98 with the IMF adopted by Kennicutt \[Kennicutt 1983\] and the models of KA97 with a Salpeter IMF. From Table 2, it is apparent that our results are quite robust to changes in both the stellar IMF and the SPS model used to interpret the data. In none of the cases does the significance of the correlation change substantially, and the slope and intercept of the correlation vary within their estimated bootstrap resampling errors. This correlation is not exceptional; other correlations show similar behaviour, with little variation for plausible changes in IMF and SPS model. In conclusion, while plausible changes in the IMF and SPS model will change the details of the correlations (e.g. the gradients, intercepts, significances, etc.), the existence of the correlations themselves is very robust to changes in the models used to derive the SFHs.

### 5.4 How important is dust extinction?

Our results account for average age and metallicity effects only; however, from Figs. 2–6 it is clear that dust reddening can cause colour changes similar to those caused by age or metallicity. In this section, we discuss the colour changes caused by dust reddening. We conclude that dust reddening will mainly affect the metallicity (and to a lesser extent age) gradients; however, the magnitude of the dust effects is likely to be too small to significantly affect our conclusions.

In Figs. 2–6, we show the reddening vectors for two different dust models (Milky Way and Small Magellanic Cloud dust models) and for two different reddening geometries (a simple foreground screen model and an exponential star and dust disc model). We can see that the reddening effects of the foreground screen (for a total $`V`$ band extinction of 0.3 mag) are qualitatively similar (despite its unphysical dust geometry) to the central $`V`$ band absorption of a $`\tau _V=2`$ Triplex dust model. For the Triplex model, the length of the vector shows the colour difference between the central (filled circle) and outer colours, and the open circle denotes the colour effects at the disc half light radius.

How realistic are these Triplex vectors? dJiv compares absorption-only face-on Triplex dust reddening vectors with the results of Monte Carlo simulations including the effects of scattering, concluding that the Triplex model vectors are quite accurate for high optical depths but that they considerably overestimate the reddening for lower optical depths (i.e. $`\tau _V\lesssim 2`$). However, both these models include only the effects of smoothly distributed dust.
If a fraction of the dust is distributed in small, dense clumps, the reddening produced by a given overall dust mass decreases even more: the dense clumps of dust tend to simply ‘drill holes’ in the face-on galaxy light distribution, producing very little reddening per unit dust mass \[Huizinga 1994, dJiv, Kuchinski et al. 1998\]. The bottom line is that in such a situation, the galaxy colours are dominated by the least obscured stars, with the dense dust clouds having little effect on the overall colour. Therefore, the Triplex model vectors in Figs. 2–6 are arguably overestimates of the likely effects of dust reddening on our data.

From Figs. 2–6, we can see that the main effect of dust reddening would be the production of a colour gradient which would mimic small artificial age and metallicity gradients. Note, however, that the amplitudes of the majority of the observed colour gradients are larger than the Triplex model vectors. In addition, we have checked for trends in age and metallicity gradient with galaxy ellipticity: no significant trend was found, suggesting that age and metallicity trends are unlikely to be solely produced by dust. Coupled with the above arguments, this strongly suggests that most of the colour gradient of a given galaxy is likely to be due to stellar population differences. This is consistent with the findings of Kuchinski et al. \[Kuchinski et al. 1998\], who found, using more realistic dust models tuned to reproduce accurately the colour trends in high-inclination galaxies, that the colour gradients in face-on galaxies were likely to be due primarily to changes in the underlying stellar populations with radius. Therefore, our measurements of the age and metallicity gradients are likely to be qualitatively correct, but trends in dust extinction with e.g. magnitude or surface brightness may cause (or hide) weak trends in the gradients. The age and metallicity intercepts at the half light radius would be relatively unaffected: in particular, differences between the central $`V`$ band optical depths of 10 or more cannot produce trends in the ages or metallicities with anywhere near the dynamic range observed in the data. Therefore, the main conclusion of this paper, that the SFH of a spiral galaxy is primarily driven by surface density and is modulated by mass, is robust to the effects of dust reddening.

### 5.5 Comparison with H ii region metallicities

In Fig. 16, we plot our colour-based stellar metallicities at the disc half light radius against measures of the global gas metallicity via H ii region spectroscopy. $`B`$ band magnitudes were transformed into $`K`$ band magnitudes using an average $`B-K`$ colour of $`3.4\pm 0.4`$ \[dJiv\]. For bright galaxies, we have plotted metallicity determinations from Vila-Costas & Edmunds \[Vila-Costas & Edmunds 1992\] (diamonds) and Zaritsky et al. \[Zaritsky, Kennicutt & Huchra 1994\] (diagonal crosses) at a fixed radius of 3 kpc. To increase our dynamic range in terms of galaxy magnitude, we have added global gas metallicity measures from the studies of Skillman et al. \[Skillman, Kennicutt & Hodge 1989\] and van Zee et al. \[van Zee et al. 1997\]. Measurements for the same galaxies in different studies are connected by solid lines. From Fig. 16, it is clear that our colour-based stellar metallicities are in broad agreement with the trends in gas metallicity with magnitude explored by the above H ii region studies.
However, there are two notable differences between the colour-based stellar metallicities and the H ii region metallicities.

Firstly, there is a ‘saturation’ in stellar metallicity at bright magnitudes not seen in the gas metallicities, which continue to rise to the brightest magnitudes. In section 4.3.2, we argued that this ‘saturation’ was due to the slow variation of stellar metallicity with gas fraction at gas fractions lower than $`\sim 1/2`$. In Fig. 16 we test this idea. The dashed line is the gas metallicity–magnitude relation expected if galaxies evolve as closed boxes with solar metallicity yield, converting between gas fraction and magnitude using the by-eye fit to the magnitude–gas fraction correlation $`f_g=0.8+0.14(M_K+20)`$ (where the gas fraction $`f_g`$ is not allowed to drop below 0.05 or rise above 0.95; see Fig. 12). The solid line is the corresponding relation for the stellar metallicity. Note that at $`K`$ band absolute magnitudes brighter than $`-25`$ or fainter than $`-19`$ the model metallicity–magnitude relation is poorly constrained: the gas fractions at the bright and faint end asymptotically approach zero and unity respectively. The closed box model indicates that our interpretation of the offset between gas and stellar metallicities at the brightest magnitudes is essentially correct: the gas metallicity of galaxies is around 0.4 dex higher than the average stellar metallicity. Furthermore, the slope of the gas metallicity–magnitude relation is slightly steeper than that of the stellar metallicity–magnitude relation. However, the simple closed box model is far from perfect: this model underpredicts the stellar metallicity of spiral galaxies with $`M_K\gtrsim -21`$. This may be a genuine shortcoming of the closed box model; however, note that we have only crudely translated gas fraction into magnitude: the closed box model agrees with the observations quite closely if we plot gas and stellar metallicity against gas fraction (Fig. 9).

Secondly, there is a sharp drop in the estimated stellar metallicity at faint magnitudes that is not apparent in the gas metallicities. This drop, which occurs for stellar metallicities lower than $`1/10`$ solar, is unlikely to be physical: stellar and gas metallicities should be quite similar at high gas fractions (equivalent to faint magnitudes; Fig. 16). However, the SPS models are quite uncertain at such low metallicities, suggesting that the SPS models overpredict the near-IR magnitudes of very metal-poor composite stellar populations.

We have also compared our derived metallicity gradients with the gas metallicity gradients from Vila-Costas & Edmunds \[Vila-Costas & Edmunds 1992\] and Zaritsky et al. \[Zaritsky, Kennicutt & Huchra 1994\]. Though our measurements have large scatter, we detect an average metallicity gradient of $`-0.06\pm 0.01`$ dex kpc<sup>-1</sup> for the whole sample. This gradient is quite comparable to the average metallicity gradient of $`-0.065\pm 0.007`$ dex kpc<sup>-1</sup> from the studies of Vila-Costas & Edmunds \[Vila-Costas & Edmunds 1992\] and Zaritsky et al. \[Zaritsky, Kennicutt & Huchra 1994\]. Given the simplicity of the assumptions going into the colour-based analysis, and considerable SPS model and dust reddening uncertainties, we take the broad agreement between the gas and stellar metallicities as an important confirmation of the overall validity of the colour-based method.
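A minimal sketch of the closed-box comparison just described, assuming the standard closed-box relations for the gas and mean stellar metallicities, the solar yield $`p=0.02`$, and the by-eye gas fraction fit quoted above (the grid of magnitudes is arbitrary):

```python
import numpy as np

p = 0.02  # yield, fixed at solar metallicity as in the text

def closed_box_metallicities(f_g):
    """Standard closed-box relations: gas metallicity Z_gas = -p ln(f_g);
    mean stellar metallicity Z_* = p (1 + f_g ln f_g / (1 - f_g))."""
    Z_gas = -p * np.log(f_g)
    Z_star = p * (1.0 + f_g * np.log(f_g) / (1.0 - f_g))
    return Z_gas, Z_star

M_K = np.linspace(-26.0, -18.0, 9)
f_g = np.clip(0.8 + 0.14 * (M_K + 20.0), 0.05, 0.95)  # by-eye fit, clamped
Z_gas, Z_star = closed_box_metallicities(f_g)
for m, fg, zg, zs in zip(M_K, f_g, Z_gas, Z_star):
    print(f"M_K={m:6.1f}  f_g={fg:.2f}  "
          f"[Z_gas/Zsun]={np.log10(zg / 0.02):+.2f}  "
          f"[Z_*/Zsun]={np.log10(zs / 0.02):+.2f}")
```

At low gas fractions the stellar metallicity saturates near the yield while the gas metallicity keeps rising, reproducing the offset of several tenths of a dex at the bright end.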
### 5.6 A possible physical interpretation

In order to demonstrate the utility of the trends presented in section 4 in investigating star formation laws and galaxy evolution, we consider a simple model of galaxy evolution. We have found that the surface density of a galaxy is the most important parameter in describing its SFH; therefore, we consider a simple model where the star formation and chemical enrichment history are controlled by the surface density of gas in a given region \[Schmidt 1959, Phillipps, Edmunds & Davies 1990, Phillipps & Edmunds 1991, Dopita & Ryder 1994, Elmegreen & Parravano 1994, Prantzos & Silk 1998, Kennicutt 1998\]. In our model, we assume that the initial state 12 Gyr ago is a thin layer of gas with a given surface density $`\sigma _g`$. We allow this layer of gas to form stars according to a Schmidt \[Schmidt 1959\] star formation law:

$$\mathrm{\Psi }(t)=-\frac{1}{\alpha }\frac{d\sigma _g(t)}{dt}=k\sigma _g(t)^n,$$ (8)

where $`\mathrm{\Psi }(t)`$ is the SFR at a time $`t`$, $`\alpha `$ is the fraction of gas that is locked up in long-lived stars (we assume $`\alpha =0.7`$ hereafter; this value is appropriate for a Salpeter IMF), $`k`$ is the efficiency of star formation and the exponent $`n`$ determines how sensitively the star formation rate depends on the gas surface density \[Phillipps & Edmunds 1991\]. Integration of Equation 8 gives:

$$\sigma _g(t)/\sigma _0=e^{-\alpha kt},$$ (9)

for $`n=1`$ and:

$$\sigma _g(t)/\sigma _0=[(n-1)\sigma _0^{n-1}\alpha kt+1]^{1/(1-n)},$$ (10)

for $`n\ne 1`$, where $`\sigma _0`$ is the initial surface density of gas. Note that the star formation (and gas depletion) history of the $`n=1`$ case is independent of the initial surface density; to be consistent with the trends in SFH with surface brightness presented in this paper, $`n`$ must therefore be larger than one. We allow no gas inflow or outflow from this region, i.e. our model is a closed box. In addition, in keeping with our definition of the star formation law above (where we allow a fraction of the gas consumed in star formation to be returned immediately, enriched with heavy elements), we use the instantaneous recycling approximation. While this clearly will introduce some inaccuracy into our model predictions, given the simplicity of this model a more sophisticated approach is not warranted. Using these approximations, it is possible to relate the stellar metallicity $`Z`$ to the gas fraction $`f_g`$ (e.g. Pagel & Patchett 1975; Phillipps & Edmunds 1991; Pagel 1998):

$$Z(t)=p\left(1+\frac{f_g\mathrm{ln}f_g}{1-f_g}\right),$$ (11)

where $`p`$ is the yield, in this case fixed at solar metallicity, $`p=0.02`$. Numerical integration of the SFR from Equations 8, 9 and 10 allows determination of the average age of the stellar population $`A`$.

In modelling the SFH in this way, we are attempting to describe the broad strokes of the trends in SFH observed in Fig. 7, using the variation in SFH caused by variations in initial surface density alone. In order to compare the models to the data meaningfully, however, it is necessary to translate the initial surface density into a surface brightness. This is achieved using the solar metallicity SPS models of BC98 to translate the model SFR into a $`K`$ band surface brightness.
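The model can be summarized in a short numerical sketch; the surface density values below are illustrative only, and the SFR-weighted mean age is one simple way to realize the ‘average age’ $`A`$ mentioned above:

```python
import numpy as np

alpha, p = 0.7, 0.02        # lock-up fraction and yield, as in the text

def evolve_patch(sigma_0, n=1.6, k=0.007, t_final=12.0, steps=2000):
    """Evolve one gas patch under the Schmidt law (Eq. 8); return the final
    gas fraction, the SFR-weighted mean stellar age and the stellar
    metallicity from Eq. (11).  Time is in Gyr, sigma in arbitrary units."""
    t = np.linspace(0.0, t_final, steps)
    if n == 1.0:
        sigma_g = sigma_0 * np.exp(-alpha * k * t)                    # Eq. (9)
    else:
        sigma_g = sigma_0 * ((n - 1.0) * sigma_0**(n - 1.0) * alpha * k * t
                             + 1.0) ** (1.0 / (1.0 - n))              # Eq. (10)
    sfr = k * sigma_g**n
    f_g = sigma_g[-1] / sigma_0
    mean_age = t_final - (t * sfr).sum() / sfr.sum()    # SFR-weighted age
    Z = p * (1.0 + f_g * np.log(f_g) / (1.0 - f_g))                   # Eq. (11)
    return f_g, mean_age, Z

for sigma_0 in [10.0, 100.0, 1000.0]:   # illustrative initial densities
    f_g, age, Z = evolve_patch(sigma_0)
    print(f"sigma_0={sigma_0:7.1f}  f_g={f_g:.2f}  <A>={age:5.2f} Gyr  "
          f"Z/Zsun={Z / 0.02:.2f}")
```

Denser patches deplete their gas faster, ending up older and more metal rich, which is the qualitative content of the trends in Fig. 7.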
While the use of the solar metallicity model ignores the effects of metallicity in determining the $`K`$ band surface brightness, the uncertainties in the $`K`$ band model mass to light ratio are sufficiently large that tracking the metallicity evolution of the $`K`$ band surface brightness is unwarranted for our purposes.

Using the above models, we tried to reproduce the trends in Fig. 7 by adjusting the values of $`n`$ and $`k`$. We found that reasonably good fits to both trends are easily achieved by balancing off changes in $`n`$ against compensating changes in $`k`$ over quite a broad range in $`n`$ and $`k`$. In order to improve this situation, we used the correlation between $`K`$ band central surface brightness and gas fraction as an additional constraint: in order to predict the relationship between central surface brightness and global gas fraction, we constructed the mass-weighted gas fraction for an exponential disc galaxy with the appropriate $`K`$ band central surface brightness. We found that models with $`n\approx 1.6`$ and $`k\approx 0.007`$ fit the observed trends well, with modest increases in $`n`$ being made possible by decreasing $`k`$, and vice versa. Plots of the fits to the age, metallicity and gas fraction trends with surface brightness are given in Fig. 17.

The success of this simple, local density-dependent model in reproducing the broad trends observed in Fig. 17 is quite compelling, re-affirming our assertion that surface density plays a dominant rôle in driving the SFH and chemical enrichment history of spiral galaxies. This simple model also explains the origin of age and metallicity gradients in spiral galaxies: the local surface density in spiral galaxies decreases with radius, leading to younger ages and lower metallicities at larger radii. Indeed, the simple model even explains the trend in age gradient with surface brightness: high surface brightness galaxies will have a smaller age gradient per disc scale length because of the flatter slope of the (curved) age–surface brightness relation at high surface brightnesses.

However, our simple model is clearly inadequate: there is no explicit mass dependence in this model, which is required by the data. This may be alleviated by the use of a different star formation law. Kennicutt \[Kennicutt 1998\] studied the correlation between global gas surface density and SFR, finding that a Schmidt law with exponent 1.5 or a star formation law which depends on both the local gas surface density and the dynamical timescale (which depends primarily on the rotation curve shape, and therefore on mostly global parameters) explained the data equally well. There may also be a surface density threshold below which star formation cannot occur \[Kennicutt 1989\]. In addition, the fact that galaxy metallicity depends on magnitude and surface brightness in almost equal amounts (age is much more sensitive to surface brightness) suggests that e.g. galaxy mass-dependent feedback may be an important process in the chemical evolution of galaxies. Moreover, a closed box chemical evolution model is strongly disfavoured by e.g. studies of the metallicity distribution of stars in the Milky Way and other, nearby galaxies, where closed box models hugely overpredict the number of low-metallicity stars in the solar neighbourhood \[Worthey, Dorman & Jones 1996, Pagel 1998\].
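A sketch of the mass-weighted gas fraction construction for an exponential disc, under the same local Schmidt-law evolution; the scale length, radial grid and central densities below are hypothetical choices made purely for illustration:

```python
import numpy as np

def global_gas_fraction(sigma_0, n=1.6, k=0.007, t=12.0, alpha=0.7,
                        h=1.0, r_max=6.0, npts=400):
    """Mass-weighted gas fraction of an exponential disc whose annuli evolve
    independently under the local Schmidt law (Eq. 10); h is the disc scale
    length and r_max the outer radius, in the same (arbitrary) units."""
    r = np.linspace(0.0, r_max * h, npts)
    sig0 = sigma_0 * np.exp(-r / h)                 # initial gas profile
    sig_gas = sig0 * ((n - 1.0) * sig0**(n - 1.0) * alpha * k * t
                      + 1.0) ** (1.0 / (1.0 - n))
    w = r                                           # annulus weight ~ 2*pi*r*dr
    return (sig_gas * w).sum() / (sig0 * w).sum()

# Higher central density -> lower global gas fraction, as in Fig. 17.
for sigma_0 in [30.0, 300.0, 3000.0]:               # hypothetical values
    print(sigma_0, round(global_gas_fraction(sigma_0), 2))
```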
This discrepancy can be solved by allowing gas to flow in and out of a region, and indeed such flows are expected in galaxy formation and evolution models set in the context of large scale structure formation \[Cole et al. 1994, Kauffmann & Charlot 1998, Valageas & Schaeffer 1999\].

## 6 Conclusions

We have used a diverse sample of low-inclination spiral galaxies with radially-resolved optical and near-IR photometry to investigate trends in SFH with radius, as a function of galaxy structural parameters. A maximum-likelihood analysis was performed, comparing SPS model colours with those of our sample galaxies, allowing use of all of the available colour information. Uncertainties in the assumed grid of SFHs, the SPS models and uncertainties due to dust reddening were not taken into account. Because of these uncertainties, the absolute ages and metallicities we derived may be inaccurate; however, our conclusions will be robust in a relative sense. In particular, dust will mainly affect the age and metallicity gradients; however, the majority of a given galaxy’s age or metallicity gradient is likely to be due to gradients in its stellar population alone. The global age and metallicity trends are robust to the effects of dust reddening. Our main conclusions are as follows.

* Most spiral galaxies have stellar population gradients, in the sense that their inner regions are older and more metal rich than their outer regions. The amplitude of the age gradients increases from high surface brightness to low surface brightness galaxies. An exception to this trend is provided by the faint S0s from the Ursa Major Cluster of galaxies: the central stellar populations of these galaxies are younger and more metal rich than their outer regions.
* The stellar metallicity–magnitude relation ‘saturates’ for the brightest galaxies. This ‘saturation’ is a real effect: as the gas is depleted in the brightest galaxies, the gas metallicity tends to rise continually, while the stellar metallicity flattens off as it tends towards the yield. The colour-based metallicities of the faintest spirals fall considerably ($`\sim 1`$ dex) below the gas metallicity–luminosity relation: this may indicate that the SPS models overpredict the $`K`$ band luminosity of very low metallicity composite stellar populations.
* There is a strong correlation between the SFH of a galaxy (as probed by its age, metallicity and gas fraction) and the $`K`$ band surface brightness and magnitude of that galaxy. From consideration of the distribution of galaxies in the age–magnitude–surface brightness and metallicity–magnitude–surface brightness spaces, we find that the SFH of a galaxy correlates primarily with either its local or global $`K`$ band surface brightness: the effects of $`K`$ band absolute magnitude are of secondary importance.
* When the gas fraction is taken into account, the correlation between SFH and surface density remains, with a small amount of mass dependence. Motivated by the strong correlation between SFH and surface density, and by the correlation between age and local $`K`$ band surface brightness, we tested the observations against a closed box local density-dependent star formation law. We found that despite its simplicity, many of the correlations could be reproduced by this model, indicating that the local surface density is the most important parameter in shaping the SFH of a given region in a galaxy.
A particularly significant shortcoming of this model is the lack of a magnitude dependence for the stellar metallicity: this magnitude dependence may indicate that mass-dependent feedback is an important process in shaping the chemical evolution of a galaxy. However, there is significant cosmic scatter in these correlations (some of which may be due to small bursts of recent star formation), suggesting that the mass and density of a galaxy may not be the only parameters affecting its SFH.

## Acknowledgements

We wish to thank Richard Bower for his comments on early versions of the manuscript and Mike Edmunds, Harald Kuntschner and Bernard Rauscher for useful discussions. EFB would like to thank the Isle of Man Education Board for their generous funding. Support for RSdJ was provided by NASA through Hubble Fellowship grant #HF-01106.01-98A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. This project made use of STARLINK facilities in Durham. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
# Discrete asymptotic nets and W-congruences in Plücker line geometry

## 1 Introduction

The modern theory of integrable partial differential equations is closely related to the XIX century differential geometry as presented in the monographs of Bianchi and Darboux . In that classical period many geometers studied ”interesting” classes of surfaces. A remarkable property of these surfaces (or, more appropriately, of coordinate systems on surfaces and submanifolds) is that they allow for transformations which exhibit the so called permutability property. Such transformations, called, depending on the context, the Darboux, Bianchi, Bäcklund, Laplace, Moutard, Combescure, Lévy, Ribaucour or fundamental transformations of Jonas, can also be described in terms of certain families of lines called line congruences .

To give an example, the angle between the asymptotic directions on pseudospherical surfaces in $`𝔼^3`$, when written as a function of the asymptotic coordinates, satisfies the sine-Gordon equation. From this point of view the study of pseudospherical surfaces is, roughly speaking, equivalent to the study of the sine-Gordon equation and its solutions. The transformations of pseudospherical surfaces, introduced by Bianchi and Bäcklund, lead to the celebrated Bäcklund transformations of the sine-Gordon equation. At the end of the XIX-th century it was also discovered that most of the ”interesting” submanifolds are provided by reductions of conjugate nets (see Section 2), and the transformations between such submanifolds are the corresponding reductions of the fundamental (or Jonas) transformations of conjugate nets. It is worth mentioning that, from the point of view of integrable systems, conjugate nets and their iso-conjugate deformations and transformations are described by the so called multicomponent Kadomtsev–Petviashvilii hierarchy .

In the soliton theory discrete integrable systems are considered more fundamental than the corresponding differential systems . Discrete equations include the continuous theory as the result of a limiting procedure; moreover, different limits can give various differential equations from one discrete equation. Furthermore, discrete equations reveal some symmetries lost in the continuous limit. During the last few years the connection between geometry and integrability has been observed also at the discrete level. It turns out that the discrete analogs of pseudospherical surfaces were studied a long time ago by Sauer; see and references therein. In connection with the Hirota discrete analog of the sine-Gordon equation these ”discrete pseudospherical surfaces” were investigated by Bobenko and Pinkall . In the book of Sauer one can find also other examples of discrete surfaces, or better $`^2`$ lattices in $`^3`$; in particular, he defined discrete asymptotic nets and discrete conjugate nets (consult also Sections 2 and 5). These definitions not only have a clear geometric meaning, but also provide the proper, from the point of view of integrability, discretizations of asymptotic and conjugate nets on surfaces.
The importance of discrete conjugate nets in integrability theory was recognized in , where it was demonstrated that (the discrete analog of) the Laplace sequence of such lattices provides a geometric interpretation of Hirota’s discretization of the two dimensional Toda system – one of the most important equations of the soliton theory and its applications. Soon after that Doliwa and Santini defined and studied the discrete analogs of multidimensional conjugate nets (multidimensional quadrilateral lattices). They also found that the corresponding equations were already known in the literature, having been obtained by Bogdanov and Konopelchenko from the $`\overline{}`$ approach. The Darboux-type transformations of the quadrilateral lattices have been found by Mañas, Doliwa and Santini . The same authors also investigated in detail the geometry of these transformations ; in order to do that, the theory of discrete congruences has been constructed as well.

In the recent literature one can find various examples of integrable discrete geometries (see, for example, and articles in ). It turns out that all the integrable lattices known up to now are special cases of asymptotic or quadrilateral lattices. For example, the discrete pseudospherical surfaces investigated by Bobenko and Pinkall and the discrete affine spheres considered by Bobenko and Schief are asymptotic lattices subjected to additional constraints.

Given a physical system described by integrable partial differential equations, one of the first steps towards quantizing the model is to find its discrete version preserving the integrability properties (see, for example, and references therein). It turns out that often (see, for example, the discussion in ) information coming from the quantum model arises naturally as a result of the solution of the classical discrete integrable equations. Some recent attempts to quantize the theory of gravity use the approach of fluctuating geometries (see the recent reviews ) based on the concept of discrete manifolds. However, most of the research in this direction is done by computer simulations, and therefore examples of lattice geometries described by integrable equations may be of some help in developing this program. Such integrable lattice geometries could then be studied using powerful tools of the soliton theory, such as the (quantum) inverse spectral transform, algebro-geometric methods of integration, etc.

The connection of asymptotic nets with stationary axially symmetric solutions of the Einstein equations is well known in the literature (see, for example, ). Recently an intriguing link between self-dual Einstein spaces and discrete affine spheres, which form an integrable subcase of discrete asymptotic nets, was discovered by Schief . It is therefore reasonable to study the integrability of general asymptotic lattices.

The main results of this paper are contained in Theorems 3 and 6, which incorporate the theory of asymptotic lattices and their transformations into the theory of quadrilateral lattices. These results are direct analogs of the above-mentioned approach to asymptotic nets in terms of conjugate nets in the Plücker quadric. The direct proof of integrability of asymptotic lattices, which does not use the theory of quadrilateral lattices, is contained in the permutability Theorems 8 and 9. A more detailed description of the results is given below.

* Asymptotic lattices are represented in the Plücker quadric by isotropic congruences.
* The asymptotic tangents are represented by focal lattices of such congruences.
* The Darboux-Bäcklund transformations of asymptotic lattices are provided by a discrete analog of W–congruences.
* Discrete W–congruences can be constructed from the discrete Moutard transformation via the discrete analog of the Lelieuvre formulas introduced in .
* The discrete W–congruences are represented in the Plücker quadric by quadrilateral lattices.
* The discussed transformations of asymptotic lattices satisfy the permutability property.

To make our exposition self-contained, we first recall the necessary results of the theory of conjugate nets and quadrilateral lattices (Section 2) and the basic notions of line geometry (Section 3). Section 4 is intended to motivate our investigations and contains a brief summary of the theory of asymptotic nets and W–congruences. In Section 5 we construct the theory of discrete asymptotic nets within the line geometry of Plücker. Section 6 provides a detailed exposition of the discrete W–congruences. Finally, in Section 7 we state and prove the permutability theorems for the Moutard transformation and for the corresponding $`W`$–transformation of asymptotic lattices.

## 2 Quadrilateral lattices and congruences

In this Section we present basic results from the theory of conjugate nets and congruences , and their discrete generalizations . We give here only the definitions necessary to understand the results of this paper; in particular, we consider only two dimensional conjugate nets and lattices.

###### Definition 1.

A coordinate system on a surface in $`^M`$ is called a conjugate net if the tangents to any parametric line transported in the second direction form a developable surface (see Fig. 1).

This geometric characterization can be put into the form of the Laplace equation satisfied by the homogeneous coordinates $`𝒚(v_1,v_2)^{M+1}`$ of the net

$$\partial _1\partial _2𝒚=a\partial _1𝒚+b\partial _2𝒚+c𝒚,$$ (1)

here $`v_1`$, $`v_2`$ are the conjugate parameters, $`\partial _i`$ denotes the partial derivative with respect to $`v_i`$, $`i=1,2`$, and $`a(v_1,v_2)`$, $`b(v_1,v_2)`$, $`c(v_1,v_2)`$ are functions of the conjugate parameters.

Given a conjugate net on a surface, it defines two new conjugate nets called the Laplace transforms of the old net; the transformations are provided by the tangents to the parametric lines, see Fig. 1. The discrete version of a conjugate net on a surface is given by a two dimensional quadrilateral lattice (quadrilateral surface).

###### Definition 2.

By a quadrilateral surface we mean a mapping of $`^2`$ in $`^M`$ such that its elementary quadrilaterals are planar (see Fig. 2).

###### Remark.

Notice that the tangents to any parametric discrete curve transported in the second direction form a discrete analog of a developable surface, i.e., a one-parameter family of lines tangent to a (discrete) curve.

This geometric characterization implies a linear relation between the homogeneous coordinates $`𝒚(m_1,m_2)^{M+1}`$ of the four points of any elementary quadrilateral with vertices $`𝒚`$, $`T_1𝒚`$, $`T_2𝒚`$ and $`T_1T_2𝒚`$, where $`T_i`$ denotes the shift operator along the $`i`$-th direction of the lattice, $`i=1,2`$. Such a relation can be put into the form of the discrete Laplace equation

$$\mathrm{\Delta }_1\mathrm{\Delta }_2𝒚=a\mathrm{\Delta }_1𝒚+b\mathrm{\Delta }_2𝒚+c𝒚,$$ (2)

where $`\mathrm{\Delta }_i=T_i-1`$, $`i=1,2`$, is the partial difference operator. Intersections of the tangent lines define two new quadrilateral surfaces called the Laplace transforms of the old lattice.
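The planarity constraint is easy to verify numerically. The following minimal sketch (random data, purely illustrative) propagates the affine form of equation (2), with $`c=0`$ as explained in the Remark that follows, and checks that every elementary quadrilateral is planar:

```python
import numpy as np

rng = np.random.default_rng(0)
M, dim = 6, 4

# Initial data: first row and first column of the lattice (random walks).
y = np.zeros((M, M, dim))
y[0, 0] = rng.normal(size=dim)
for m1 in range(1, M):
    y[m1, 0] = y[m1 - 1, 0] + rng.normal(size=dim)
for m2 in range(1, M):
    y[0, m2] = y[0, m2 - 1] + rng.normal(size=dim)

# Propagate the affine discrete Laplace equation (2) with c = 0:
# D1 D2 y = a D1 y + b D2 y, with arbitrary coefficients a, b per cell.
for m1 in range(M - 1):
    for m2 in range(M - 1):
        a, b = rng.uniform(-0.3, 0.3, size=2)
        d1 = y[m1 + 1, m2] - y[m1, m2]
        d2 = y[m1, m2 + 1] - y[m1, m2]
        y[m1 + 1, m2 + 1] = y[m1, m2] + (1 + a) * d1 + (1 + b) * d2

# Every elementary quadrilateral is planar: the three edge vectors from
# y(m1, m2) span a rank-2 space, so the third singular value vanishes.
worst = max(np.linalg.svd(np.stack([y[i + 1, j] - y[i, j],
                                    y[i, j + 1] - y[i, j],
                                    y[i + 1, j + 1] - y[i, j]]),
                          compute_uv=False)[2]
            for i in range(M - 1) for j in range(M - 1))
print("largest 3rd singular value (should be ~0):", worst)
```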
###### Remark.

Restriction from $`^M`$ to its affine part, and therefore from homogeneous coordinates to non-homogeneous ones, results in putting $`c=0`$ in equations (1)-(2).

The tangents of the lattice are canonical examples of special two-parameter families of straight lines called discrete congruences.

###### Definition 3.

A $`^2`$-parameter family of lines in $`^M`$ is called a two dimensional discrete congruence if any two neighbouring lines are coplanar.

###### Remark.

Two neighbouring tangent lines $`\langle [𝒚],T_i[𝒚]\rangle `$ and $`\langle T_j^{-1}[𝒚],T_i[𝒚]\rangle `$, $`i\ne j`$, of the quadrilateral surface $`[𝒚(m_1,m_2)]`$ are coplanar and intersect, giving $`𝒚_{ij}`$, the Laplace transform of the lattice (see Fig. 2).

###### Definition 4.

The intersection points of the lines of a discrete congruence with its nearest neighbours in the $`i`$-th direction form the $`i`$-th focal lattice of the congruence.

One can show that the focal lattices of two dimensional congruences are quadrilateral lattices. The Laplace transformation can be considered as a correspondence between the focal lattices of a congruence. Similar notions and results exist in the continuous context.

###### Definition 5.

A two-parameter family of lines in $`^M`$ is called a two dimensional congruence if through each line pass two developable surfaces consisting of lines of the family.

One can show that the curves of regression of such developables form two focal surfaces tangent to the congruence; moreover, the developables cut the focal surfaces along conjugate nets.

## 3 Line geometry and the Plücker quadric

The interest in studying families of lines was motivated by the theory of optics, and such mathematicians as Monge, Malus and Hamilton began to create the general theory of rays. However, it was Plücker who first considered straight lines in $`^3`$ as primary elements; he also found a convenient way to parametrize the space of lines . The geometric interpretation of this parametrization was clarified later by Plücker’s pupil Klein and was one of the non-trivial examples in his Erlangen program. We present in this Section the basic notions and results of line geometry; details can be found, for example, in .

The description of straight lines in $`^3`$ takes a more symmetric form if we consider $`^3`$ as the affine part of the projective space $`^3`$ (by the standard embedding $`𝒚\mapsto [(𝒚,1)^T]`$), and study straight lines in that space. Given two different points $`[𝒖]`$, $`[𝒗]`$ of $`^3`$, the line $`\langle [𝒖],[𝒗]\rangle `$ passing through them can be represented, up to a proportionality factor, by the bi-vector

$$𝔭=𝒖\wedge 𝒗\stackrel{2}{\bigwedge }(^4);$$ (3)

changing the reference points of the line results in multiplying the bi-vector by the determinant of the transition matrix between their representatives. The space of straight lines in $`^3`$ can therefore be identified with a subset of $`(\stackrel{2}{\bigwedge }(^4))^5`$; the necessary and sufficient condition for a non-zero bi-vector $`𝔭`$ to represent a straight line is given by the homogeneous equation

$$𝔭\wedge 𝔭=0,$$ (4)

a simple consequence of (3). If $`𝒆_1,\mathrm{}𝒆_4`$ is a basis of $`^4`$ then the following bi-vectors

$$𝒆_{i_1i_2}=𝒆_{i_1}\wedge 𝒆_{i_2},1i_1<i_24,$$

form the corresponding basis of $`\stackrel{2}{\bigwedge }(^4)`$:

$$𝔭=p^{12}𝒆_{12}+p^{13}𝒆_{13}+\mathrm{}+p^{34}𝒆_{34}.$$

Equation (4) rewritten in the Plücker (or Grassmann–Plücker) coordinates $`p^{ij}`$ reads

$$p^{12}p^{34}-p^{13}p^{24}+p^{14}p^{23}=0,$$ (5)

and defines in $`^5`$ the so-called Plücker (or Plücker–Klein) quadric $`𝒬_P`$.
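A minimal computational sketch of these constructions may help fix conventions; the sample points are random, the coordinate ordering is a choice, and the pairing checked at the end is the bilinear form whose vanishing is the intersection condition (6) introduced below:

```python
import numpy as np

def pluecker(u, v):
    """Plücker coordinates p^{ij} = u_i v_j - u_j v_i of the line through
    the points [u], [v] (u, v homogeneous 4-vectors)."""
    P = np.outer(u, v) - np.outer(v, u)
    i, j = np.triu_indices(4, k=1)
    return P[i, j]            # order: p12, p13, p14, p23, p24, p34

def quadric(p):
    """Left-hand side of the Plücker relation (5)."""
    p12, p13, p14, p23, p24, p34 = p
    return p12 * p34 - p13 * p24 + p14 * p23

def omega(p, q):
    """Bilinear pairing (from p ^ q); it vanishes iff the lines meet."""
    return (p[0] * q[5] + p[5] * q[0] - p[1] * q[4] - p[4] * q[1]
            + p[2] * q[3] + p[3] * q[2])

rng = np.random.default_rng(1)
a, b, c = rng.normal(size=(3, 4))
p = pluecker(a, b)                    # line <a, b>
q = pluecker(a, c)                    # line <a, c>: shares the point [a]
r = pluecker(c, rng.normal(size=4))   # a generic second line
print(quadric(p))    # ~0: p lies on the Plücker quadric
print(omega(p, q))   # ~0: intersecting lines
print(omega(p, r))   # generically nonzero: skew lines
```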
Let us present the basic subsets of the quadric $`𝒬_P`$ and the corresponding configurations of lines in $`^3`$.

1. If two lines intersect, then the corresponding bi-vectors $`𝔭_i`$, $`i=1,2`$, satisfy not only equations of the form (4), but also

$$𝔭_1\wedge 𝔭_2=0,$$ (6)

i.e., the corresponding points $`[𝔭_1]`$, $`[𝔭_2]`$ of the Plücker quadric are joined by an isotropic (i.e., contained in $`𝒬_P`$) line. Therefore isotropic lines of $`^5`$ correspond to planar pencils of lines in $`^3`$.
2. A conic section of $`𝒬_P`$ by a non-isotropic plane represents the so called regulus, i.e., one family of lines of a ruled quadric in $`^3`$.

## 4 Asymptotic nets and W-congruences in line geometry

We collect here, for the reader’s convenience, various results of the theory of asymptotic nets , which we consider necessary to understand the methods and goals of the next sections, where we treat the discrete case.

###### Definition 6.

A coordinate system on a surface in $`^3`$ is called an asymptotic parametrization if in each point of the surface the osculating planes of the parametric curves coincide with the tangent plane to the surface.

###### Remark.

Throughout the paper we consider asymptotic parametrizations of surfaces in the projective space $`^3`$, but we perform calculations in its affine part $`^3`$.

Given a surface $`𝒙(u_1,u_2)`$ in $`^3`$ in asymptotic coordinates $`u_1`$, $`u_2`$, then

$`\partial _1^2𝒙`$ $`=a_1\partial _1𝒙+b_1\partial _2𝒙,`$ (7)
$`\partial _2^2𝒙`$ $`=a_2\partial _1𝒙+b_2\partial _2𝒙.`$ (8)

As a consequence of the compatibility condition $`\partial _1^2\partial _2^2𝒙=\partial _2^2\partial _1^2𝒙`$ we obtain that there exists a function $`\varphi (u_1,u_2)`$ such that

$$a_1=\partial _1\varphi ,b_2=\partial _2\varphi .$$

The tangents to the asymptotic lines are represented, in the appropriate gauge, by the bi-vectors

$$𝔭_1=e^{\varphi }\left(\begin{array}{c}𝒙\\ 1\end{array}\right)\wedge \left(\begin{array}{c}\partial _1𝒙\\ 0\end{array}\right),𝔭_2=e^{\varphi }\left(\begin{array}{c}𝒙\\ 1\end{array}\right)\wedge \left(\begin{array}{c}\partial _2𝒙\\ 0\end{array}\right);$$

notice that the line passing through $`[𝔭_1]`$ and $`[𝔭_2]`$ is an isotropic line. Equations (7)-(8) lead to the linear system

$`\partial _1𝔭_1`$ $`=b_1𝔭_2,`$ (9)
$`\partial _2𝔭_2`$ $`=a_2𝔭_1,`$ (10)

and, in consequence, to the Laplace equations

$`\partial _1\partial _2𝔭_1`$ $`=\partial _2(\mathrm{log}b_1)\partial _1𝔭_1+a_2b_1𝔭_1,`$
$`\partial _1\partial _2𝔭_2`$ $`=\partial _1(\mathrm{log}a_2)\partial _2𝔭_2+a_2b_1𝔭_2.`$

The above results can be expressed as follows.

###### Theorem 1.

A surface in $`^3`$ viewed as the envelope of its tangent planes corresponds to a congruence of isotropic lines of the Plücker quadric $`𝒬_P`$; the focal nets of the congruence represent the asymptotic directions of the surface.

Let us equip $`^3`$ with the scalar product and consider the corresponding cross-product $`\times `$. One can show that any asymptotic net $`𝒙(u_1,u_2)`$ in $`^3`$ can be considered as a solution of the linear system

$`\partial _1𝒙=\partial _1𝑵\times 𝑵,`$ (11)
$`\partial _2𝒙=𝑵\times \partial _2𝑵,`$ (12)

where $`𝑵(u_1,u_2)`$ is orthogonal to the surface and satisfies the equation

$$\partial _1\partial _2𝑵=q𝑵,$$ (13)

with a function $`q(u_1,u_2)`$. Equation (13) was first studied by Moutard , and equations (11)-(12) connecting solutions of the Moutard equation with asymptotic nets are known as the Lelieuvre formulas .
###### Remark.

It should be mentioned that the Lelieuvre formulas can be formulated within pure affine (even projective) geometry, without referring to additional structures in the ambient space .

One can show that $`𝑵`$ satisfies, in addition to the Moutard equation, the following linear equations

$`\partial _1^2𝑵`$ $`=(\partial _1\varphi )\partial _1𝑵-b_1\partial _2𝑵+d_1𝑵,`$
$`\partial _2^2𝑵`$ $`=-a_2\partial _1𝑵+(\partial _2\varphi )\partial _2𝑵+d_2𝑵,`$

where

$`d_1`$ $`=\partial _2b_1+b_1\partial _2\varphi ,`$
$`d_2`$ $`=\partial _1a_2+a_2\partial _1\varphi ,`$

moreover

$`q`$ $`=\partial _1\partial _2\varphi +b_1a_2.`$

Given a scalar solution $`\theta (u_1,u_2)`$ of the Moutard equation (13), consider the linear system

$`\partial _1(\theta \widehat{𝑵})`$ $`=(\partial _1\theta )𝑵-\theta \partial _1𝑵,`$ (14)
$`\partial _2(\theta \widehat{𝑵})`$ $`=-(\partial _2\theta )𝑵+\theta \partial _2𝑵,`$ (15)

compatible due to (13). Cross-differentiation of equations (14)-(15) shows that $`\widehat{𝑵}(u_1,u_2)`$ satisfies another Moutard equation

$$\partial _1\partial _2\widehat{𝑵}=\widehat{q}\widehat{𝑵},$$ (16)

with the proportionality function $`\widehat{q}(u_1,u_2)`$ given by

$$\widehat{q}=\frac{\partial _1\partial _2\widehat{\theta }}{\widehat{\theta }},\widehat{\theta }=\frac{1}{\theta }.$$

The transition from $`𝑵`$ to $`\widehat{𝑵}`$, relating solutions of the two Moutard equations (13) and (16), is called the Moutard transformation . A simple calculation shows that the surface

$$\widehat{𝒙}=𝒙+\widehat{𝑵}\times 𝑵,$$ (17)

can be obtained from $`\widehat{𝑵}`$ via the Lelieuvre formulas. Notice that the straight lines $`\langle 𝒙,\widehat{𝒙}\rangle `$ are tangent to both surfaces in corresponding points, i.e., the lines form a so called Weingarten (or W, for short) congruence.

###### Definition 7.

A two-parameter family of straight lines in $`^3`$ tangent to two surfaces in such a way that the asymptotic coordinate lines on both surfaces correspond is called a W–congruence.

There exists another way to find W-congruences tangent to a given asymptotic net $`𝒙`$. Because $`\theta \widehat{𝑵}\times 𝑵`$ is tangent to $`𝒙`$, it can be decomposed as

$$\theta \widehat{𝑵}\times 𝑵=A\partial _1𝒙+B\partial _2𝒙;$$

the coefficients $`A(u_1,u_2)`$ and $`B(u_1,u_2)`$ of the above decomposition define, together with $`𝒙(u_1,u_2)`$, the W–congruence. It can be shown that the coefficients satisfy the linear system

$`\partial _2A`$ $`=a_2B,`$ (18)
$`\partial _1B`$ $`=b_1A.`$ (19)

Finally, we consider W–congruences in the spirit of Plücker geometry. The bi-vector

$$𝔮\propto \left(\begin{array}{c}𝒙\\ 1\end{array}\right)\wedge \left(\begin{array}{c}\widehat{𝒙}\\ 1\end{array}\right)$$

represents the W–congruence. The bi-vector $`𝔮`$ in the gauge

$$𝔮=\theta e^{\varphi }\left(\begin{array}{c}𝒙\\ 1\end{array}\right)\wedge \left(\begin{array}{c}\widehat{𝑵}\times 𝑵\\ 0\end{array}\right)=A𝔭_1+B𝔭_2,$$

satisfies, due to the linear systems (9)-(10) and (18)-(19), the Laplace equation

$$\partial _1\partial _2𝔮=(\partial _2\mathrm{log}B)\partial _1𝔮+(\partial _1\mathrm{log}A)\partial _2𝔮+\left[a_2b_1-(\partial _1\mathrm{log}A)(\partial _2\mathrm{log}B)\right]𝔮.$$

###### Theorem 2.

W–congruences are represented by conjugate nets in the Plücker quadric $`𝒬_P`$.

## 5 Discrete asymptotic nets

###### Definition 8 ().

An asymptotic lattice is a mapping $`𝒙:^2\to ^3`$ such that any point $`𝒙`$ of the lattice is coplanar with its four nearest neighbours $`T_1𝒙`$, $`T_2𝒙`$, $`T_1^{-1}𝒙`$ and $`T_2^{-1}𝒙`$ (see Fig. 3).

The plane in Definition 8 can be called the tangent plane of the asymptotic lattice at the point $`𝒙`$.
We can express the asymptotic lattice condition in the form of the linear equations

$`\mathrm{\Delta }_1\stackrel{~}{\mathrm{\Delta }}_1𝒙`$ $`=a_1\mathrm{\Delta }_1𝒙+b_1\mathrm{\Delta }_2𝒙,`$ (20)
$`\mathrm{\Delta }_2\stackrel{~}{\mathrm{\Delta }}_2𝒙`$ $`=a_2\mathrm{\Delta }_1𝒙+b_2\mathrm{\Delta }_2𝒙,`$ (21)

where $`\stackrel{~}{\mathrm{\Delta }}_i=1-T_i^{-1}`$, $`i=1,2`$, is the backward partial difference operator. Equations (20)-(21) can be rewritten using the backward tangent vectors as

$`\mathrm{\Delta }_1\stackrel{~}{\mathrm{\Delta }}_1𝒙`$ $`=\stackrel{~}{a}_1\stackrel{~}{\mathrm{\Delta }}_1𝒙+\stackrel{~}{b}_1\stackrel{~}{\mathrm{\Delta }}_2𝒙,`$ (22)
$`\mathrm{\Delta }_2\stackrel{~}{\mathrm{\Delta }}_2𝒙`$ $`=\stackrel{~}{a}_2\stackrel{~}{\mathrm{\Delta }}_1𝒙+\stackrel{~}{b}_2\stackrel{~}{\mathrm{\Delta }}_2𝒙;`$ (23)

here the backward and forward data of the asymptotic lattice are related by the following formulas

$`\stackrel{~}{a}_1`$ $`={\displaystyle \frac{1-b_2}{D}}-1,`$
$`\stackrel{~}{b}_1`$ $`={\displaystyle \frac{b_1}{D}},`$
$`\stackrel{~}{a}_2`$ $`={\displaystyle \frac{a_2}{D}},`$
$`\stackrel{~}{b}_2`$ $`={\displaystyle \frac{1-a_1}{D}}-1,`$

with

$$D=(1-a_1)(1-b_2)-a_2b_1=\left((1+\stackrel{~}{a}_1)(1+\stackrel{~}{b}_2)-\stackrel{~}{a}_2\stackrel{~}{b}_1\right)^{-1}.$$

The compatibility condition of the linear system (20)-(21) leads, among others, to

$$T_2^{-1}(1-a_1)T_2(1+\stackrel{~}{a}_1)=T_1^{-1}(1-b_2)T_1(1+\stackrel{~}{b}_2).$$ (24)

The backward asymptotic tangent lines can be represented in line geometry by the bi-vectors

$$\stackrel{~}{𝔭}_i=\left(\begin{array}{c}𝒙\\ 1\end{array}\right)\wedge \left(\begin{array}{c}\stackrel{~}{\mathrm{\Delta }}_i𝒙\\ 0\end{array}\right),i=1,2.$$

Using equations (22)-(23) it can be easily shown that

$`T_1\stackrel{~}{𝔭}_1`$ $`=(\stackrel{~}{a}_1+1)\stackrel{~}{𝔭}_1+\stackrel{~}{b}_1\stackrel{~}{𝔭}_2,`$ (25)
$`T_2\stackrel{~}{𝔭}_2`$ $`=\stackrel{~}{a}_2\stackrel{~}{𝔭}_1+(\stackrel{~}{b}_2+1)\stackrel{~}{𝔭}_2.`$ (26)

Applying the shift operator $`T_2`$ to equation (25) and using formulas (25)-(26) yields an equivalent form of the discrete Laplace equation

$$T_1T_2\stackrel{~}{𝔭}_1=(T_2\stackrel{~}{a}_1+1)T_2\stackrel{~}{𝔭}_1+\frac{T_2\stackrel{~}{b}_1}{\stackrel{~}{b}_1}(\stackrel{~}{b}_2+1)T_1\stackrel{~}{𝔭}_1-\frac{T_2\stackrel{~}{b}_1}{\stackrel{~}{b}_1D}\stackrel{~}{𝔭}_1,$$

similarly we get

$$T_1T_2\stackrel{~}{𝔭}_2=(T_1\stackrel{~}{b}_2+1)T_1\stackrel{~}{𝔭}_2+\frac{T_1\stackrel{~}{a}_2}{\stackrel{~}{a}_2}(\stackrel{~}{a}_1+1)T_2\stackrel{~}{𝔭}_2-\frac{T_1\stackrel{~}{a}_2}{\stackrel{~}{a}_2D}\stackrel{~}{𝔭}_2.$$

Notice that the lines $`\langle \stackrel{~}{𝔭}_1,\stackrel{~}{𝔭}_2\rangle `$ are generators of the Plücker quadric (both asymptotic tangents intersect in $`𝒙`$) and represent pairs $`(𝒙,\pi )`$, where $`\pi `$ is the tangent plane of the asymptotic lattice at the point $`𝒙`$. Two neighbouring tangent planes $`\pi `$ and $`T_i^{-1}\pi `$, $`i=1,2`$, intersect along the backward tangent line represented by $`\stackrel{~}{𝔭}_i`$ (see Fig. 4). We have thus proved the following result:

###### Theorem 3.

A discrete asymptotic net in $`^3`$ viewed as the envelope of its tangent planes corresponds to a congruence of isotropic lines of the Plücker quadric $`𝒬_P`$; the focal lattices of the congruence represent the asymptotic directions of the lattice.

###### Corollary 4.

The lattices in $`𝒬_P`$ which represent the two families of asymptotic tangents of an asymptotic lattice are Laplace transforms of each other.
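The relations between the forward and backward data can be checked directly; a minimal sketch with arbitrary coefficient values:

```python
import numpy as np

rng = np.random.default_rng(2)
a1, b1, a2, b2 = rng.uniform(-0.2, 0.2, size=4)   # forward data (arbitrary)

D = (1 - a1) * (1 - b2) - a2 * b1
ta1, tb1 = (1 - b2) / D - 1, b1 / D               # backward (tilde) data
ta2, tb2 = a2 / D, (1 - a1) / D - 1

# Determinant identity relating forward and backward data:
print(D, 1.0 / ((1 + ta1) * (1 + tb2) - ta2 * tb1))   # the two agree

# Recovering the forward data from the backward data (the same formulas
# with the roles of the two sets exchanged):
Dt = (1 + ta1) * (1 + tb2) - ta2 * tb1
print(a1, 1 - (1 + tb2) / Dt)                     # the two agree
```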
Similarly to the continuous case, there exist a discrete analog of the Lelieuvre representation and a discrete analog of the Moutard equation; for details see . It can be shown that

$`\mathrm{\Delta }_1𝒙`$ $`=\mathrm{\Delta }_1𝑵\times 𝑵,`$ (27)
$`\mathrm{\Delta }_2𝒙`$ $`=𝑵\times \mathrm{\Delta }_2𝑵,`$ (28)

where the vector $`𝑵`$, orthogonal to the tangent plane of the lattice, satisfies the discrete Moutard equation (see also )

$$T_1T_2𝑵+𝑵=Q(T_1𝑵+T_2𝑵),$$

whose equivalent form is

$$\mathrm{\Delta }_1\mathrm{\Delta }_2𝑵=(Q-1)(\mathrm{\Delta }_1𝑵+\mathrm{\Delta }_2𝑵+2𝑵).$$ (29)

We would like to add some new ingredients to the connection between the Lelieuvre representation of asymptotic lattices and the linear system (20)-(21). The normal vector satisfies the equations

$`\mathrm{\Delta }_1\stackrel{~}{\mathrm{\Delta }}_1𝑵`$ $`=a_1\mathrm{\Delta }_1𝑵-b_1\mathrm{\Delta }_2𝑵+d_1𝑵,`$ (30)
$`\mathrm{\Delta }_2\stackrel{~}{\mathrm{\Delta }}_2𝑵`$ $`=-a_2\mathrm{\Delta }_1𝑵+b_2\mathrm{\Delta }_2𝑵+d_2𝑵.`$ (31)

The compatibility conditions of the system (30)-(31) with the Moutard equation (29) give

$`(1-b_2)T_1(1+\stackrel{~}{b}_2)=`$ $`Q(T_2^{-1}Q),`$ (32)
$`(1-a_1)T_2(1+\stackrel{~}{a}_1)=`$ $`Q(T_1^{-1}Q).`$ (33)

Combining equations (32)-(33) with (24) yields the following identity

$$\frac{T_1T_2\stackrel{~}{a}_1+1}{(T_1Q)(T_2D)(T_2\stackrel{~}{a}_1+1)}=\frac{T_1T_2\stackrel{~}{b}_2+1}{(T_2Q)(T_1D)(T_1\stackrel{~}{b}_2+1)}=F,$$ (34)

which will be used in the next Section.

## 6 Discrete W-congruences

Similarly to the continuous case, given a solution $`\mathrm{\Theta }(n_1,n_2)`$ of the discrete Moutard equation (29), one can define the (discrete analog of the) Moutard transformation (see also ) by solving the linear system

$`\mathrm{\Delta }_1(\mathrm{\Theta }\widehat{𝑵})`$ $`=(\mathrm{\Delta }_1\mathrm{\Theta })𝑵-\mathrm{\Theta }\mathrm{\Delta }_1𝑵,`$ (35)
$`\mathrm{\Delta }_2(\mathrm{\Theta }\widehat{𝑵})`$ $`=-(\mathrm{\Delta }_2\mathrm{\Theta })𝑵+\mathrm{\Theta }\mathrm{\Delta }_2𝑵,`$ (36)

which yields

$$T_1T_2\widehat{𝑵}+\widehat{𝑵}=\widehat{Q}(T_1\widehat{𝑵}+T_2\widehat{𝑵}),$$

with the new proportionality factor

$$\widehat{Q}=\frac{T_1T_2\widehat{\mathrm{\Theta }}+\widehat{\mathrm{\Theta }}}{T_1\widehat{\mathrm{\Theta }}+T_2\widehat{\mathrm{\Theta }}},\widehat{\mathrm{\Theta }}=\frac{1}{\mathrm{\Theta }}.$$

Let us define the following lattice

$$\widehat{𝒙}=𝒙+\widehat{𝑵}\times 𝑵;$$ (37)

a simple calculation shows that formula (37) gives a new asymptotic lattice with the normal vector $`\widehat{𝑵}`$ entering into the Lelieuvre formulas. The line $`\langle 𝒙,\widehat{𝒙}\rangle `$ is tangent to both lattices, therefore we have

$$\mathrm{\Theta }\widehat{𝑵}\times 𝑵=A\mathrm{\Delta }_1𝒙+B\mathrm{\Delta }_2𝒙=\stackrel{~}{A}\stackrel{~}{\mathrm{\Delta }}_1𝒙+\stackrel{~}{B}\stackrel{~}{\mathrm{\Delta }}_2𝒙,$$ (38)

where

$`\stackrel{~}{A}`$ $`=A(\stackrel{~}{a}_1+1)+B\stackrel{~}{a}_2,`$ (39)
$`\stackrel{~}{B}`$ $`=A\stackrel{~}{b}_1+B(\stackrel{~}{b}_2+1).`$ (40)

Notice that the two parameter family of lines $`\langle 𝒙,\widehat{𝒙}\rangle `$ has properties analogous to those of the W–congruence of the continuous case.

###### Definition 9.

By a discrete W–congruence we mean a two-parameter family of straight lines connecting two asymptotic lattices in such a way that the lines are tangent to the lattices in corresponding points.

We have shown how to get discrete W–congruences from the Moutard transformation. It turns out that any discrete W–congruence can be obtained in this way.
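These formulas are constructive: prescribing $`Q`$ and the values of $`𝑵`$ on the coordinate axes determines an asymptotic lattice. A minimal numerical sketch (random data; the planarity test uses the fact that all four difference vectors of $`𝒙`$ are orthogonal to $`𝑵`$):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 6

# A solution N of the discrete Moutard equation: prescribe N on the axes
# and an arbitrary Q on each cell, then propagate
# T1 T2 N = Q (T1 N + T2 N) - N.
N = np.zeros((M, M, 3))
N[:, 0] = rng.normal(size=(M, 3))
N[0, 1:] = rng.normal(size=(M - 1, 3))
for m1 in range(M - 1):
    for m2 in range(M - 1):
        Q = rng.uniform(0.4, 0.6)
        N[m1 + 1, m2 + 1] = Q * (N[m1 + 1, m2] + N[m1, m2 + 1]) - N[m1, m2]

# Discrete Lelieuvre formulas (27)-(28): integrate x from N.  The Moutard
# equation guarantees path-independence of this integration.
x = np.zeros((M, M, 3))
for m1 in range(1, M):
    x[m1, 0] = x[m1 - 1, 0] + np.cross(N[m1, 0] - N[m1 - 1, 0], N[m1 - 1, 0])
for m1 in range(M):
    for m2 in range(1, M):
        x[m1, m2] = x[m1, m2 - 1] + np.cross(N[m1, m2 - 1],
                                             N[m1, m2] - N[m1, m2 - 1])

# Asymptotic lattice check: each interior point is coplanar with its four
# nearest neighbours, i.e. all four difference vectors are orthogonal to N.
err = max(abs(np.dot(x[i + di, j + dj] - x[i, j], N[i, j]))
          for i in range(1, M - 1) for j in range(1, M - 1)
          for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)])
print("max deviation from the tangent plane (should be ~0):", err)
```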
###### Proposition 5.

Given a discrete W–congruence connecting $`𝐱`$ and $`\widehat{𝐱}`$, the normal vectors $`𝐍`$ and $`\widehat{𝐍}`$, which define $`𝐱`$ and $`\widehat{𝐱}`$ via the Lelieuvre formulas, are related by a Moutard transformation.

$$\begin{array}{ccc}𝒙& \stackrel{W\text{–congruence}}{\longleftrightarrow }& \widehat{𝒙}\\ \updownarrow & & \updownarrow \\ 𝑵& \stackrel{\text{Moutard transf.}}{\longleftrightarrow }& \widehat{𝑵}\end{array}$$

###### Proof.

From Definition 9 it follows that $`\widehat{𝒙}`$ must be of the form

$$\widehat{𝒙}=𝒙+\psi \widehat{𝑵}\times 𝑵.$$ (41)

We first show that without loss of generality one can put the proportionality function $`\psi (n_1,n_2)`$ equal to $`1`$. Applying the partial difference operator $`\mathrm{\Delta }_1`$ to equation (41) and using the first part (27) of the discrete Lelieuvre formulas we get

$$T_1\widehat{𝑵}\times \widehat{𝑵}=T_1𝑵\times 𝑵+(T_1\psi )T_1\widehat{𝑵}\times T_1𝑵-\psi \widehat{𝑵}\times 𝑵.$$ (42)

The scalar products with $`T_1\widehat{𝑵}`$ and with $`𝑵`$ give

$`(T_1𝑵\times 𝑵)\cdot T_1\widehat{𝑵}`$ $`=\psi (\widehat{𝑵}\times 𝑵)\cdot T_1\widehat{𝑵},`$
$`(T_1\widehat{𝑵}\times \widehat{𝑵})\cdot 𝑵`$ $`=(T_1\psi )(T_1\widehat{𝑵}\times T_1𝑵)\cdot 𝑵,`$

which after simple manipulation give

$$(T_1\psi )\psi =1;$$ (43)

similarly we have

$$(T_2\psi )\psi =1.$$ (44)

Notice that due to equations (43) and (44) the normal vector $`\widehat{𝑵}^{}=\psi \widehat{𝑵}`$ defines the same lattice $`\widehat{𝒙}`$, which shows that in formula (41) we can put $`\psi \equiv 1`$. After such a change, formula (42) can be rewritten in the form

$$(T_1𝑵-\widehat{𝑵})\times (𝑵-T_1\widehat{𝑵})=0,$$

which yields

$$T_1𝑵-\widehat{𝑵}=\lambda (𝑵-T_1\widehat{𝑵}).$$ (45)

Similarly we obtain

$$T_2𝑵+\widehat{𝑵}=\mu (𝑵+T_2\widehat{𝑵}).$$ (46)

Formulas (45)-(46) give, together with the Moutard equations satisfied by $`𝑵`$ and $`\widehat{𝑵}`$, the following equations

$`\lambda Q`$ $`=(T_2\lambda )\widehat{Q},`$
$`\mu Q-1`$ $`=(T_2\lambda )(\mu -\widehat{Q}),`$
$`\mu Q`$ $`=(T_1\mu )\widehat{Q},`$
$`\lambda Q-1`$ $`=(T_1\mu )(\lambda -\widehat{Q}).`$

This gives

$$(T_2\lambda )\mu =(T_1\mu )\lambda ,$$

which implies that

$$\lambda =\frac{T_1\mathrm{\Theta }}{\mathrm{\Theta }},\mu =\frac{T_2\mathrm{\Theta }}{\mathrm{\Theta }},$$ (47)

and moreover $`\mathrm{\Theta }`$ satisfies the Moutard equation of $`𝑵`$. Finally, equations (45)-(46) with $`\lambda `$ and $`\mu `$ given by (47) can be put in the form of the Moutard transformation (35)-(36). ∎

Equation (38) implies

$$\mathrm{\Theta }\widehat{𝑵}=\stackrel{~}{A}\stackrel{~}{\mathrm{\Delta }}_1𝑵-\stackrel{~}{B}\stackrel{~}{\mathrm{\Delta }}_2𝑵+\stackrel{~}{C}𝑵.$$ (48)

The compatibility condition of equation (48) with the Moutard transformation (35)-(36) gives, among others,

$`T_1\left({\displaystyle \frac{\stackrel{~}{B}}{\stackrel{~}{b}_2+1}}\right)`$ $`={\displaystyle \frac{B}{Q}},`$ (49)
$`T_2\left({\displaystyle \frac{\stackrel{~}{A}}{\stackrel{~}{a}_1+1}}\right)`$ $`={\displaystyle \frac{A}{Q}}.`$ (50)

The following result is a generalization of Theorem 2 to the discrete case.

###### Theorem 6.

Discrete W–congruences are represented by two dimensional quadrilateral lattices in the Plücker quadric $`𝒬_P`$.

###### Proof.

Lines of the W–congruence are represented by the bi-vectors

$$𝔮=\left(\begin{array}{c}𝒙\\ 1\end{array}\right)\wedge \left(\begin{array}{c}\mathrm{\Theta }\widehat{𝑵}\times 𝑵\\ 0\end{array}\right)=\stackrel{~}{A}\stackrel{~}{𝔭}_1+\stackrel{~}{B}\stackrel{~}{𝔭}_2.$$

We will show that $`𝔮`$ satisfies the Laplace equation.
Because of (25)-(26) we have

$`T_1𝔮`$ $`=(T_1\stackrel{~}{A})\left[(\stackrel{~}{a}_1+1)\stackrel{~}{𝔭}_1+\stackrel{~}{b}_1\stackrel{~}{𝔭}_2\right]+(T_1\stackrel{~}{B})T_1\stackrel{~}{𝔭}_2,`$
$`T_2𝔮`$ $`=(T_2\stackrel{~}{B})\left[(\stackrel{~}{b}_2+1)\stackrel{~}{𝔭}_2+\stackrel{~}{a}_2\stackrel{~}{𝔭}_1\right]+(T_2\stackrel{~}{A})T_2\stackrel{~}{𝔭}_1,`$

and therefore

$$T_1T_2𝔮=\frac{T_1T_2\stackrel{~}{A}}{T_2\stackrel{~}{A}}T_2(\stackrel{~}{a}_1+1)T_2𝔮+\frac{T_1T_2\stackrel{~}{B}}{T_1\stackrel{~}{B}}T_1(\stackrel{~}{b}_2+1)T_1𝔮+U\stackrel{~}{𝔭}_1+V\stackrel{~}{𝔭}_2,$$ (51)

where

$`U`$ $`=\stackrel{~}{a}_2{\displaystyle \frac{T_1T_2\stackrel{~}{A}}{T_2\stackrel{~}{A}}}T_2\left(\stackrel{~}{A}\stackrel{~}{b}_1-\stackrel{~}{B}(\stackrel{~}{a}_1+1)\right)+(\stackrel{~}{a}_1+1){\displaystyle \frac{T_1T_2\stackrel{~}{B}}{T_1\stackrel{~}{B}}}T_1\left(\stackrel{~}{B}\stackrel{~}{a}_2-\stackrel{~}{A}(\stackrel{~}{b}_2+1)\right),`$
$`V`$ $`=(\stackrel{~}{b}_2+1){\displaystyle \frac{T_1T_2\stackrel{~}{A}}{T_2\stackrel{~}{A}}}T_2\left(\stackrel{~}{A}\stackrel{~}{b}_1-\stackrel{~}{B}(\stackrel{~}{a}_1+1)\right)+\stackrel{~}{b}_1{\displaystyle \frac{T_1T_2\stackrel{~}{B}}{T_1\stackrel{~}{B}}}T_1\left(\stackrel{~}{B}\stackrel{~}{a}_2-\stackrel{~}{A}(\stackrel{~}{b}_2+1)\right).`$

Using equations (39)-(40) we get

$$U=-\stackrel{~}{a}_2\frac{T_1T_2\stackrel{~}{A}}{T_2\stackrel{~}{A}}T_2\left(\frac{B}{D}\right)-(\stackrel{~}{a}_1+1)\frac{T_1T_2\stackrel{~}{B}}{T_1\stackrel{~}{B}}T_1\left(\frac{A}{D}\right),$$

which, due to equations (49)-(50), can be transformed to

$$U=-Q(T_1A)(T_2B)\left(\frac{\stackrel{~}{a}_2(T_1T_2\stackrel{~}{a}_1+1)}{A(T_1Q)(T_2D)(T_2\stackrel{~}{a}_1+1)}+\frac{(\stackrel{~}{a}_1+1)(T_1T_2\stackrel{~}{b}_2+1)}{B(T_2Q)(T_1D)(T_1\stackrel{~}{b}_2+1)}\right).$$

Identity (34) gives, together with equations (39)-(40), that

$$U=-\stackrel{~}{A}QF\frac{(T_1A)(T_2B)}{AB},$$

similarly,

$$V=-\stackrel{~}{B}QF\frac{(T_1A)(T_2B)}{AB},$$

which yields

$$U\stackrel{~}{𝔭}_1+V\stackrel{~}{𝔭}_2=-QF\frac{(T_1A)(T_2B)}{AB}𝔮.$$ (52)

Inserting (52) into equation (51) leads to the conclusion that the bi-vector $`𝔮`$ satisfies the Laplace equation. ∎

From the interpretation of W–congruences as quadrilateral lattices in $`𝒬_P`$ we infer the following property (see the final remarks of Section 3).

###### Corollary 7.

Four neighbouring lines of a W–congruence are generators of a ruled quadric in $`^3`$.

We would like to stress that the discrete W–congruences are not discrete congruences in the sense of Definition 3. In order to explain this terminological confusion we would like to make a few historical comments. In the early days of line geometry, a congruence meant any two-parameter family of straight lines in $`^3`$. It turns out that in $`^3`$ such a family has, in general, two focal surfaces. However, in a higher dimensional ambient space two-parameter families of lines do not, in general, have focal surfaces. From the point of view of transformations of surfaces it was therefore necessary to put some restrictions on the initial definition, and we read in \[17, p. 11\]: ”We call a congruence in n-space a two-parameter family of lines such that through each line pass two developable surfaces of the family.” Going further, into multi-parameter families of lines and into the discrete domain, congruences have been defined in such a way as to keep this basic property, namely that they have focal lattices; this requirement leads to Definition 3.
In the continuous case W–congruences do have focal surfaces, but this is, as we mentioned above, a typical property of two-parameter families of lines in $`^3`$. In our opinion, this terminological confusion suggests that it is more convenient to consider discrete W–congruences as quadrilateral lattices in the line space.

## 7 Permutability theorems

In this Section we consider superpositions of the Moutard transformations and the corresponding superpositions of $`W`$-transformations of asymptotic lattices. We also prove the permutability theorems for both transformations. Let $`\mathrm{\Theta }^1(n_1,n_2)`$ and $`\mathrm{\Theta }^2(n_1,n_2)`$ be two solutions of the Moutard equation of the lattice $`𝑵(n_1,n_2)`$, i.e.,

$$T_1T_2\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^1\\ \mathrm{\Theta }^2\end{array}\right)+\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^1\\ \mathrm{\Theta }^2\end{array}\right)=Q\left[T_1\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^1\\ \mathrm{\Theta }^2\end{array}\right)+T_2\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^1\\ \mathrm{\Theta }^2\end{array}\right)\right].$$ (53)

We use $`\mathrm{\Theta }^1`$ to define the first Moutard transformation $`𝑵_1`$ of the lattice $`𝑵`$ and the corresponding transformation $`\mathrm{\Theta }_1^2`$ of $`\mathrm{\Theta }^2`$ via equations (35)-(36):

$`\mathrm{\Delta }_1\left[\mathrm{\Theta }^1\left(\begin{array}{c}𝑵_1\\ \mathrm{\Theta }_1^2\end{array}\right)\right]`$ $`=(\mathrm{\Delta }_1\mathrm{\Theta }^1)\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^2\end{array}\right)-\mathrm{\Theta }^1\mathrm{\Delta }_1\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^2\end{array}\right),`$ (60)
$`\mathrm{\Delta }_2\left[\mathrm{\Theta }^1\left(\begin{array}{c}𝑵_1\\ \mathrm{\Theta }_1^2\end{array}\right)\right]`$ $`=-(\mathrm{\Delta }_2\mathrm{\Theta }^1)\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^2\end{array}\right)+\mathrm{\Theta }^1\mathrm{\Delta }_2\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^2\end{array}\right),`$ (67)

which implies that both $`𝑵_1`$ and $`\mathrm{\Theta }_1^2`$ satisfy the same Moutard equation

$$T_1T_2\left(\begin{array}{c}𝑵_1\\ \mathrm{\Theta }_1^2\end{array}\right)+\left(\begin{array}{c}𝑵_1\\ \mathrm{\Theta }_1^2\end{array}\right)=Q_1\left[T_1\left(\begin{array}{c}𝑵_1\\ \mathrm{\Theta }_1^2\end{array}\right)+T_2\left(\begin{array}{c}𝑵_1\\ \mathrm{\Theta }_1^2\end{array}\right)\right],$$

where

$$Q_1=\frac{T_1T_2\widehat{\mathrm{\Theta }}^1+\widehat{\mathrm{\Theta }}^1}{T_1\widehat{\mathrm{\Theta }}^1+T_2\widehat{\mathrm{\Theta }}^1},\widehat{\mathrm{\Theta }}^1=\frac{1}{\mathrm{\Theta }^1}.$$

Similarly, we use $`\mathrm{\Theta }^2`$ to define the second Moutard transformation $`𝑵_2`$ of the lattice $`𝑵`$ and the corresponding transformation $`\mathrm{\Theta }_2^1`$ of $`\mathrm{\Theta }^1`$:

$`\mathrm{\Delta }_1\left[\mathrm{\Theta }^2\left(\begin{array}{c}𝑵_2\\ \mathrm{\Theta }_2^1\end{array}\right)\right]`$ $`=-(\mathrm{\Delta }_1\mathrm{\Theta }^2)\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^1\end{array}\right)+\mathrm{\Theta }^2\mathrm{\Delta }_1\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^1\end{array}\right),`$ (74)
$`\mathrm{\Delta }_2\left[\mathrm{\Theta }^2\left(\begin{array}{c}𝑵_2\\ \mathrm{\Theta }_2^1\end{array}\right)\right]`$ $`=(\mathrm{\Delta }_2\mathrm{\Theta }^2)\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^1\end{array}\right)-\mathrm{\Theta }^2\mathrm{\Delta }_2\left(\begin{array}{c}𝑵\\ \mathrm{\Theta }^1\end{array}\right).`$ (81)

Notice the modification of signs in the transformation formulas, which, however, does not change the fact that both
$`𝑵_2`$ and $`\mathrm{\Theta }_2^1`$ satisfy the same Moutard equation

$$T_1T_2\left(\begin{array}{c}𝑵_2\\ \mathrm{\Theta }_2^1\end{array}\right)+\left(\begin{array}{c}𝑵_2\\ \mathrm{\Theta }_2^1\end{array}\right)=Q_2\left[T_1\left(\begin{array}{c}𝑵_2\\ \mathrm{\Theta }_2^1\end{array}\right)+T_2\left(\begin{array}{c}𝑵_2\\ \mathrm{\Theta }_2^1\end{array}\right)\right],$$

where

$$Q_2=\frac{T_1T_2\widehat{\mathrm{\Theta }}^2+\widehat{\mathrm{\Theta }}^2}{T_1\widehat{\mathrm{\Theta }}^2+T_2\widehat{\mathrm{\Theta }}^2},\widehat{\mathrm{\Theta }}^2=\frac{1}{\mathrm{\Theta }^2}.$$

Equations (60)-(81) imply that both products $`\mathrm{\Theta }^1\mathrm{\Theta }_1^2`$ and $`\mathrm{\Theta }^2\mathrm{\Theta }_2^1`$ are defined up to additive constants. Moreover, since

$`\mathrm{\Delta }_1(\mathrm{\Theta }^1\mathrm{\Theta }_1^2)`$ $`=(\mathrm{\Delta }_1\mathrm{\Theta }^1)\mathrm{\Theta }^2-\mathrm{\Theta }^1\mathrm{\Delta }_1\mathrm{\Theta }^2=\mathrm{\Delta }_1(\mathrm{\Theta }^2\mathrm{\Theta }_2^1),`$
$`\mathrm{\Delta }_2(\mathrm{\Theta }^1\mathrm{\Theta }_1^2)`$ $`=-(\mathrm{\Delta }_2\mathrm{\Theta }^1)\mathrm{\Theta }^2+\mathrm{\Theta }^1\mathrm{\Delta }_2\mathrm{\Theta }^2=\mathrm{\Delta }_2(\mathrm{\Theta }^2\mathrm{\Theta }_2^1),`$

one of these constants can be fixed in such a way that

$$\mathrm{\Theta }^1\mathrm{\Theta }_1^2=\mathrm{\Theta }^2\mathrm{\Theta }_2^1=\mathrm{\Xi }^{12}$$ (82)

holds. The following result states that there exist lattices which are simultaneous Moutard transformations of $`𝑵_1`$ and $`𝑵_2`$, as illustrated by the diagram

$$\begin{array}{ccc}𝑵& \stackrel{\mathrm{\Theta }^1}{\longrightarrow }& 𝑵_1\\ \mathrm{\Theta }^2\downarrow & & \downarrow \mathrm{\Theta }_1^2\\ 𝑵_2& \stackrel{\mathrm{\Theta }_2^1}{\longrightarrow }& 𝑵_{12}.\end{array}$$

###### Theorem 8 (Permutability of the Moutard transformations).

Let $`\mathrm{\Theta }^1`$, $`\mathrm{\Theta }^2`$ be solutions of the discrete Moutard equation of the lattice $`𝐍`$, and let $`𝐍_1`$, $`𝐍_2`$ be the corresponding two (discrete) Moutard transformations of $`𝐍`$. Then the functions $`\mathrm{\Theta }_2^1`$ and $`\mathrm{\Theta }_1^2`$, given by equations (60)-(82), provide via the formula

$$𝑵_{12}+𝑵=\frac{\mathrm{\Theta }^1\mathrm{\Theta }^2}{\mathrm{\Xi }^{12}}(𝑵_1+𝑵_2),$$ (83)

a one parameter family (because of the free integration constant in $`\mathrm{\Xi }^{12}`$) of Moutard transformations of the lattice $`𝐍_1`$ (by means of the function $`\mathrm{\Theta }_1^2`$) which are simultaneously Moutard transformations of the lattice $`𝐍_2`$ (by means of the function $`\mathrm{\Theta }_2^1`$).

###### Proof.

It is enough to verify directly that the lattice $`𝑵_{12}=𝑵_{21}`$ given by (83) is a solution of the equations

$`\mathrm{\Delta }_1(\mathrm{\Theta }_2^1𝑵_{21})`$ $`=(\mathrm{\Delta }_1\mathrm{\Theta }_2^1)𝑵_2-\mathrm{\Theta }_2^1\mathrm{\Delta }_1𝑵_2,`$
$`\mathrm{\Delta }_2(\mathrm{\Theta }_2^1𝑵_{21})`$ $`=-(\mathrm{\Delta }_2\mathrm{\Theta }_2^1)𝑵_2+\mathrm{\Theta }_2^1\mathrm{\Delta }_2𝑵_2,`$

which define the Moutard transformations of $`𝑵_2`$ by means of $`\mathrm{\Theta }_2^1`$, and that it is also a solution of the equations

$`\mathrm{\Delta }_1(\mathrm{\Theta }_1^2𝑵_{12})`$ $`=-(\mathrm{\Delta }_1\mathrm{\Theta }_1^2)𝑵_1+\mathrm{\Theta }_1^2\mathrm{\Delta }_1𝑵_1,`$
$`\mathrm{\Delta }_2(\mathrm{\Theta }_1^2𝑵_{12})`$ $`=(\mathrm{\Delta }_2\mathrm{\Theta }_1^2)𝑵_1-\mathrm{\Theta }_1^2\mathrm{\Delta }_2𝑵_1,`$

which define the Moutard transformations of $`𝑵_1`$ by means of $`\mathrm{\Theta }_1^2`$. ∎
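Theorem 8 can be tested numerically. In the following sketch the potential $`Q`$ is generated from a positive seed solution, $`\mathrm{\Theta }^2`$ is taken as a small perturbation of $`\mathrm{\Theta }^1`$ so that $`\mathrm{\Xi }^{12}`$ stays away from zero, and the superposition lattice (83) is checked against one of the defining relations of the proof; all numerical choices are ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
M = 6
tau = rng.uniform(1.0, 2.0, size=(M, M))   # positive "seed" solution
Q = (tau[1:, 1:] + tau[:-1, :-1]) / (tau[1:, :-1] + tau[:-1, 1:])

def propagate(col, row):
    """Solve T1T2 f + f = Q (T1 f + T2 f) from data on the two axes."""
    f = np.zeros((M, M) + col.shape[1:])
    f[:, 0], f[0, 1:] = col, row[1:]
    for i in range(M - 1):
        for j in range(M - 1):
            f[i + 1, j + 1] = Q[i, j] * (f[i + 1, j] + f[i, j + 1]) - f[i, j]
    return f

def transform(Th, F, sign, W0=0.0):
    """Integrate W = Th * (transform of F); sign=+1 reproduces (60)/(67),
    sign=-1 the modified-sign version (74)/(81); W0 is the free constant."""
    W = np.zeros_like(F)
    W[0, 0] = W0
    for i in range(1, M):
        W[i, 0] = W[i - 1, 0] + sign * (
            (Th[i, 0] - Th[i - 1, 0]) * F[i - 1, 0]
            - Th[i - 1, 0] * (F[i, 0] - F[i - 1, 0]))
    for i in range(M):
        for j in range(1, M):
            W[i, j] = W[i, j - 1] + sign * (
                -(Th[i, j] - Th[i, j - 1]) * F[i, j - 1]
                + Th[i, j - 1] * (F[i, j] - F[i, j - 1]))
    return W / (Th[..., None] if W.ndim == 3 else Th)

Th1 = tau                                   # first scalar solution
g = propagate(rng.uniform(1, 2, M), rng.uniform(1, 2, M))
Th2 = tau + 0.01 * g / np.abs(g).max()      # second, nearby solution
N = propagate(rng.normal(size=(M, 3)), rng.normal(size=(M, 3)))

N1, N2 = transform(Th1, N, +1), transform(Th2, N, -1)
Th12 = transform(Th1, Th2, +1, W0=1.0)
Th21 = transform(Th2, Th1, -1, W0=1.0)
Xi = Th1 * Th12     # equals Th2 * Th21: matched constants, so (82) holds
N12 = (Th1 * Th2 / Xi)[..., None] * (N1 + N2) - N    # superposition (83)

# Check: N12 is a Moutard transform of N1 by Th12 (direction-1 relation).
P = Th12[..., None] * N12
res = (P[1:] - P[:-1] + (Th12[1:] - Th12[:-1])[..., None] * N1[:-1]
       - Th12[:-1][..., None] * (N1[1:] - N1[:-1]))
print("superposition residual (should be ~0):", np.abs(res).max())
```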
Notice that the superposition formula (83) itself is of the form of the discrete Moutard equation. This is a manifestation of the frequently observed relation between discrete integrable systems and their Darboux-type transformations . Obtaining the superposition formula in this form was the reason for the modification of signs in the Moutard transformation (74)-(81). The corresponding theorem (the discrete analog of the classical Bianchi permutability theorem ) about the permutability of the $`W`$-transformations of asymptotic lattices reads as follows. ###### Theorem 9 (Permutability of the $`W`$-transformations). If $`𝐱`$ and $`𝐱_1`$ are asymptotic lattices related by a $`W`$-congruence and $`𝐱`$ and $`𝐱_2`$ are related by a second $`W`$-congruence, then there exists a one-parameter family of asymptotic lattices given, in the notation of Theorem 8, by $$𝒙_{12}=𝒙_{21}=𝒙+\frac{\mathrm{\Theta }^1\mathrm{\Theta }^2}{\mathrm{\Xi }^{12}}𝑵_1\times 𝑵_2,$$ (84) such that $`𝐱_1`$ and $`𝐱_{12}`$ are related by a $`W`$-congruence, and likewise $`𝐱_2`$ and $`𝐱_{12}`$. ###### Proof. Due to Proposition 5 the lattices $`𝒙_1`$ and $`𝒙_2`$ can be given by $`𝒙_1`$ $`=𝒙+𝑵_1\times 𝑵,`$ (85) $`𝒙_2`$ $`=𝒙-𝑵_2\times 𝑵,`$ (86) where $`𝑵`$, $`𝑵_1`$ and $`𝑵_2`$ are as in (60)-(81); notice the change of sign in (86) induced by the change of sign in (74)-(81). Transforming the lattice $`𝒙_1`$ by (86) by means of $`𝑵_{12}`$ and applying (83) one obtains (84); likewise for the lattice $`𝒙_2`$ transformed by formula (85) by means of $`𝑵_{12}`$. ∎ ## 8 Conclusion and remarks The main result of this paper consists in showing that the theory of asymptotic lattices and their transformations given by W–congruences forms a part of the theory of quadrilateral lattices. The discrete W–congruences can be considered as quadrilateral lattices in the Plücker quadric; they therefore provide non-trivial examples of quadrilateral lattices subject to quadratic constraints, whose general theory was constructed in . We also demonstrated the permutability property of the corresponding $`W`$-transformations of asymptotic lattices, thus proving directly their integrability. Our result is the next step in the realization of the general program of classification of integrable geometries as reductions of quadrilateral lattices. Such reductions usually come from additional structures in the projective ambient space (the close analogy to the Erlangen program of Klein), and/or from inner symmetries of the lattice itself (see also examples in ); in our case the basic underlying geometry is the line geometry of Plücker. ## Acknowledgments The author wishes to express his thanks to Maciej Nieszporski for discussions and for showing a preliminary version of his paper .
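To make the discrete objects of Section 7 concrete, the following short numerical sketch (Python) propagates a solution of the discrete Moutard equation (53) from data prescribed on the two coordinate axes; the lattice size, the potential $`Q`$ and the boundary data are arbitrary illustrative assumptions, not taken from this paper. Any scalar solution produced this way can serve as a transformation function $`\mathrm{\Theta }`$ in (35)-(36).

```python
import numpy as np

# A minimal sketch of the discrete Moutard equation (53):
#   N(m+1,n+1) + N(m,n) = Q(m,n) * [N(m+1,n) + N(m,n+1)].
# Given Q and data on the coordinate axes, the equation propagates N
# to the whole quadrant. All inputs below are illustrative assumptions.
M = 6                                           # lattice extent
Q = 1.0 + 0.1 * np.random.rand(M, M)            # assumed potential Q(m,n)
N = np.zeros((M + 1, M + 1, 3))                 # lattice points in R^3
N[:, 0] = np.linspace(0, 1, M + 1)[:, None] * np.array([1.0, 0.0, 0.2])
N[0, :] = np.linspace(0, 1, M + 1)[:, None] * np.array([0.0, 1.0, 0.3])

for m in range(M):
    for n in range(M):
        # solve (53) for the new corner of each elementary quadrilateral
        N[m + 1, n + 1] = Q[m, n] * (N[m + 1, n] + N[m, n + 1]) - N[m, n]

print(N[M, M])
```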
# The importance of radio sources in accounting for the highest mass black holes ## 1 Introduction Only $`\sim 10`$ % of quasars in optically-selected samples are radio loud. This fraction does, however, seem to be a function of quasar luminosity. Goldschmidt et al. (1999) show that the radio-loud fraction (where radio-loud is defined as having a 5GHz radio luminosity $`L_{5\mathrm{G}\mathrm{H}\mathrm{z}}>10^{24}\mathrm{W}\mathrm{Hz}^{-1}\mathrm{sr}^{-1}`$) increases with quasar luminosity from about 10% at $`M_\mathrm{B}=-26`$ to around 40 % at $`M_\mathrm{B}=-28`$ (Fig. 1). As can be seen from the figure, this does not seem to be due simply to the radio-quiet quasars becoming more radio-luminous with increasing AGN luminosity and crossing the threshold into radio loudness (as even radio-quiet quasars are not radio silent), but to a genuine increase in the relative numbers of the powerful ($`L_{5\mathrm{G}\mathrm{H}\mathrm{z}}>10^{25}\mathrm{W}\mathrm{Hz}^{-1}\mathrm{sr}^{-1}`$) radio sources. In this paper, we combine this observation with recent studies showing that the masses of the spheroidal components of galaxies and the mass of the central black hole are correlated (Magorrian et al. 1998; van der Marel 1999). Since at low-$`z`$ all radio-loud AGNs lie in giant ellipticals, this is strongly suggestive of a connection between radio-loudness and black hole mass. Thus we can now begin to speculate on the importance of radio-loud objects in the evolution of the most massive black holes. ## 2 Assumptions As is usual, we characterise the accretion process in quasars in terms of two parameters, the accretion efficiency $`ϵ`$, usually assumed to be $`0.1`$, and the ratio of the accretion rate to that at the Eddington limit, $`\lambda `$. We assume a bolometric correction to B-band of 12 (Elvis et al. 1994) giving a B-band quasar absolute magnitude for a black hole of mass $`M_\mathrm{h}`$ of $$M_{\mathrm{B},\mathrm{Q}}=-26.2-2.5\mathrm{lg}\lambda -2.5\mathrm{lg}(M_\mathrm{h}/10^9M_{}).$$ The spheroid luminosity, using the van der Marel (1999) relation converted into $`B`$-band magnitudes assuming $`B-V=0.9`$, is: $$M_{\mathrm{B},\mathrm{Host}}=0.555-2.5\mathrm{lg}(M_\mathrm{h}/M_{}).$$ We take $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`\mathrm{\Omega }=1`$ throughout. ## 3 Comparing the numbers of black holes to quasars By ratioing the number density of black holes of a given mass (from Salucci et al. 1999) with the number density of quasars at the peak of AGN activity at $`z=2`$ and assuming Eddington-limited accretion, one can estimate the quasar duty cycle (Fig. 2). Given that the quasar epoch lasts for $`\sim 10^9\mathrm{yr}`$, this can be converted to a quasar lifetime assuming a single burst of activity (Richstone et al. 1998). The uncertainties in the comparison are large, however, and the ratios should only be considered order-of-magnitude estimates. The comparison depends on the ratio of the high luminosity/mass ends of two very steeply declining functions. Consequently the ratio of number densities is very sensitive to errors in the calibration of the black hole mass to bulge and quasar luminosity relations. Also any intrinsic scatter in the black hole mass – bulge mass relation will tend to raise the black hole to quasar ratio when integrating the mass function from a lower bound in black hole mass. Finally, the number density of any heavily obscured radio-quiet “quasar-2” population, which is probably required to explain the hard X-ray background, is uncertain.
It may be about the same as the normal quasar population (Salucci et al. 1999). If so it would be comparable to the ratio of luminous radio galaxies (i.e. the radio-loud quasar-2s) to radio-loud quasars and the ratio of type 1 to type 2 Seyferts. However, Fabian (1999) suggests that quasar-2s could be ten times more common than quasar-1s. ## 4 The most luminous objects Goldschmidt et al. (1999) show that the number densities of radio-loud and radio-quiet quasars become comparable at $`M_{\mathrm{B},\mathrm{Q}}\sim -28`$. This corresponds to a black hole of mass $`5\times 10^9M_{}`$ for $`\lambda =1`$, or a host luminosity today of $`M_B\sim -23.7`$, at the top end of the observed galaxy luminosity function and consistent with the most optically-luminous FRI hosts seen today. The number density of these black holes is $`\sim 10^{-6}`$ Mpc<sup>-3</sup>. Using the quasar luminosity function of Goldschmidt & Miller (1998) we obtain a number density of these objects at $`z=2`$ of $`2\times 10^{-8}`$ Mpc<sup>-3</sup> (comoving), a factor of $`\sim 50`$ lower than the number density of corresponding black holes now. Adding the obscured population will raise the number density by a factor of 2-10, resulting in a total of $`\sim 10^{-7}\mathrm{Mpc}^{-3}`$. This is an order of magnitude less than the number density of corresponding black holes in the local Universe, but is a smaller ratio than for the black holes powering quasars close to the break in the black hole mass function where the ratio is around 100 (Fig. 2), and perhaps as high as 1000 (Richstone et al. 1998). The similarity in number densities of radio-loud and radio-quiet objects at these luminosities (unless we assume a very large population of radio-quiet quasar-2s) allows us to constrain the lifetimes of the radio-loud objects in a similar manner to those of the radio-quiets. If they had very disparate lifetimes one class or the other would dominate the mass accretion and produce objects with giant black holes, which are not seen (so far), whilst the other class would be unable to accrete a significant fraction of their black-hole mass. Thus we conclude that all the most luminous quasars, regardless of type, have duty cycles $`\sim 0.1`$, meaning a maximum active lifetime of $`\sim 10^8`$ yr. This agrees with radio source ages based on lobe expansion speed estimates (Scheuer 1995), and allows a moderate amount of growth of the black hole as a black hole increases in mass by a factor $`e`$ every $`t_\mathrm{S}=4\times 10^7(ϵ/0.1)\mathrm{yr}`$. ## 5 Lower luminosity objects As the quasar luminosity decreases, the radio fraction drops off rapidly, to only $`\sim `$10% at $`M_\mathrm{B}>-26`$. Why? At low redshifts radio galaxies are seen in giant elliptical hosts, but low luminosity radio-quiet quasars can be found in either ellipticals or spirals, e.g. McLure et al. 1998. Perhaps radio-loud objects are found exclusively associated with massive (and therefore rare) $`>10^9M_{}`$ black holes, accreting at sub-Eddington rates whereas low-luminosity radio-quiets continue to be associated with Eddington or near-Eddington accretion onto less massive (and therefore much more common) black holes. This picture is supported by stored energy arguments (Rawlings & Saunders 1991), which suggest $`M_\mathrm{h}\gtrsim 10^8M_{}`$ in even low luminosity FRII radio sources. The number density of $`L_{5\mathrm{G}\mathrm{H}\mathrm{z}}>10^{24}\mathrm{W}\mathrm{Hz}^{-1}\mathrm{sr}^{-1}`$ radio sources is $`5\times 10^{-6}`$ Mpc<sup>-3</sup>, $`20`$ times lower than the number density of $`>10^9M_{}`$ black holes.
Thus radio sources with $`L_{5\mathrm{G}\mathrm{H}\mathrm{z}}\sim 10^{24}\mathrm{W}\mathrm{Hz}^{-1}\mathrm{sr}^{-1}`$ accreting with $`\lambda \sim 10^{-3}`$ would have duty cycles again constrained to be about 10%, implying lifetimes $`\sim 10^8`$ yr. An interesting alternative possibility is that these low luminosity sources are the 90 % of the very massive ($`>3\times 10^9M_{}`$) black holes which have $`\lambda \ll 1`$ at $`z\sim 2`$. This would allow them to have duty cycles $`\sim 1`$ and lifetimes $`\sim 10^9`$ yr, which would, however, be rather longer than other radio source lifetime estimates.
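A minimal sketch of the quantitative relations used above may be helpful. The code below (Python) evaluates the Section 2 magnitude formulas, with the signs as reconstructed there, and the black-hole growth implied by the e-folding time $`t_\mathrm{S}`$ of Section 4; the $`\lambda `$-scaling of $`t_\mathrm{S}`$ and all numerical inputs are illustrative assumptions, not results from this paper.

```python
import math

# Sketch of the Section 2 magnitude relations and the Section 4 growth time.
def quasar_abs_mag(m_h, lam=1.0):
    """B-band quasar magnitude for hole mass m_h (solar masses) and Eddington
    ratio lam; the zero point absorbs the bolometric correction of 12."""
    return -26.2 - 2.5 * math.log10(lam) - 2.5 * math.log10(m_h / 1e9)

def host_abs_mag(m_h):
    """B-band spheroid magnitude from the van der Marel (1999) relation,
    converted assuming B-V = 0.9."""
    return 0.555 - 2.5 * math.log10(m_h)

def growth_factor(lifetime_yr, eps=0.1, lam=1.0):
    """Mass growth over an active phase; e-folding time t_S = 4e7 (eps/0.1) yr
    at the Eddington rate, assumed here to scale as 1/lam below it."""
    t_s = 4e7 * (eps / 0.1) / lam
    return math.exp(lifetime_yr / t_s)

print(quasar_abs_mag(5e9))           # ~ -28, cf. Section 4
print(host_abs_mag(5e9))             # ~ -23.7
print(growth_factor(1e8))            # ~12: moderate growth in 10^8 yr
print(growth_factor(1e9, lam=1e-3))  # ~1: negligible growth at lam ~ 10^-3
```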
# Observations of a possible new soft gamma repeater, SGR1801-23 ## 1 Introduction Soft gamma repeaters (SGRs) are neutron stars in or near radio or optical supernova remnants. There is good evidence that they are ’magnetars’, i.e. neutron stars in which the magnetic field energy dominates all other sources of energy, including rotation (Duncan & Thompson 1992). In the case of SGR1806-20, evidence for this comes from observations of the period and period derivative of the quiescent soft X-ray emission (Kouveliotou et al. 1998). In the case of SGR1900+14, evidence comes from observations of both the spindown and of a giant flare (Kouveliotou et al. 1999, Hurley et al. 1999a; however, see Marsden, Rothschild, & Lingenfelter 1999 for a different interpretation). The magnetar model (Thompson and Duncan 1995) predicts a galactic birth rate of $``$ 1-10/10000 y, and a lifetime of $``$ 10000 y, so at any given time, up to 10 magnetars could be active. This is consistent with observational estimates of the magnetar birth rate and of the total number in the Galaxy (Kouveliotou et al. 1998). Only four have been identified to date, however, and various studies have placed upper limits on the number of active SGRs (e.g., Norris et al. 1991, Kouveliotou et al. 1992, Hurley et al. 1994). Taking the galactic magnetar census is therefore an interesting exercise for understanding the formation and life cycles of these unusual objects. In 1997 June, during a period when SGR1806-20 was undergoing a phase of intense activity, two bursts were observed whose positions were close to, but clearly inconsistent with that of this source. It was hoped that this new source would remain active, allowing a better determination of its position, but to date this has not happened. Therefore we present the existing data at this time, even though the picture is still incomplete. ## 2 Observations The two bursts were observed by four instruments: BATSE - CGRO (Meegan et al. 1996), Konus-A aboard the Kosmos spacecraft (Aptekar et al. 1997), Konus-W aboard the Wind spacecraft (Aptekar et al. 1995), and the GRB experiment aboard Ulysses (Hurley et al. 1992). Table 1 gives the details of the observations, including the time resolutions $`\mathrm{\Delta }\mathrm{T}`$ with which each instrument observed the bursts; the time histories are shown in Figures 1 and 2. Both are short, and have soft energy spectra, e.g. consistent with an optically thin thermal bremsstrahlung (OTTB) function with a kT of $``$ 25 keV. The peak fluxes and fluences are reported in Tables 1 and 2. Note that the peak flux of the second burst implies that the source is super-Eddington for any distance $``$ 250 pc; at the distance of the Galactic center (see below) it would be $`1200\mathrm{L}_\mathrm{E}`$. All these characteristics are typical of SGRs in general. In addition, there is evidence in the KONUS-W data for spectral evolution in the second burst (Frederiks et al. 1998): the initial phase has a spectrum consistent with an OTTB function with kT $``$ 20 keV, softening to kT $``$ 9 keV in the final phase. ## 3 Localization The second event was observed by three instruments in high time resolution modes (Table 1), leading to two statistically independent, narrow triangulation annuli. 
However, since two of the spacecraft (Konus-W and CGRO) were separated by only 1.4 light-seconds, these annuli have practically identical centers and radii, and therefore intersect at grazing incidence to define two long, narrow error boxes, whose lengths are constrained by the third (Konus-W/BATSE) annulus. Only one is consistent with the BATSE error circle (radius $``$ 5<sup>o</sup>), but the error box is fully contained within it, and is therefore not constrained by it. The first event was observed with high time resolution by Ulysses , but with time resolution greater than the event duration by the two Konus instruments, leading to relatively wide triangulation annuli. These are consistent with the first error box, but because this event occurred only $``$ 9000 s before the second one, the Ulysses -Earth vector moved only slightly between the two, resulting again in annuli which intersect the first error box at grazing incidence. This intersection is consistent with the coarse localization capabilties of Konus-A and Konus-W. Table 3 gives the details of the triangulation annuli, and Table 4 gives the coordinates of the error box. Initially, it was thought, based on preliminary data, that a third burst originated from this source on 1997 September 12 (Hurley et al. 1997; Kouveliotou et al. 1997) and that the Rossi X-Ray Timing Explorer had observed it in the collimated field of view of the All-Sky Monitor, providing an error box which intersected the annuli (Smith et al. 1997). However, on this day, the Ulysses -Earth vector was equidistant from this error box and the position of SGR1806-20; thus the triangulation annulus for either one of these sources would automatically pass very close to the other. When the final data were obtained and a more precise annulus could be obtained, it proved to be consistent with the position of SGR1806-20 to better than 10 $`\mathrm{}`$, making this SGR the likely source of this event. Moreover, it turned out that the burst had entered the RXTE ASM proportional counters through their sides, and no location information could in fact be extracted from the data (D. Smith, private communication). Thus the only information on the location of this new SGR comes from the triangulation annuli and the BATSE error circle. The error box, which is in the direction of the Galactic center, is shown in Figure 3. The triangulation annuli of the two bursts may also be combined using the statistical method of Hurley et al. (1999b) to derive an error ellipse. The method gives an acceptable $`\chi ^2`$, but results in an ellipse which is somewhat longer than the error box and only slightly smaller in area. Given the density of possible counterpart sources in the region of Figure 3, the error box is probably the more useful description of the SGR location. It lies $`0.93^\mathrm{o}`$ from the position of SGR1806-20. A timing error of $``$ 39 s would have to be invoked for one spacecraft in each of the two observations to achieve consistency with this SGR, and there is no evidence in any of the data for such an error. ## 4 Discussion As the four known SGRs are associated with SNRs, we have searched several catalogs for possible associations. The results are shown in Figure 3. G5.4-1.2, G6.4-0.1, and G8.7-0.1 (just visible at the left of Figure 3) are from Green (1998). G6.0-1.2 is from Goss & Shaver (1970), and all other sources are from Reich, Reich, & Fürst (1990). Not all of these objects are confirmed SNRs. 
Of the confirmed SNRs, only G6.4-0.1 (=W28) is consistent with the error box. However, this SNR may be associated with the pulsar B1758-23 (Kaspi et al. 1993), which lies outside the error box. G5.4-0.29, G7.2+0.2, and G8.1+0.2 are other possible associations. Given that SGR1900+14 lies outside its supernova remnant (Hurley et al. 1999c), SGR1801-23 could also be associated with an object such as G5.9-0.4, which lies slightly outside the error box. The four known SGRs are also quiescent soft X-ray sources (e.g. Hurley et al. 1999d and references therein) with fluxes $`10^{-11}-10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, i.e. bright enough to be detected not only in pointed observations, but also in sky surveys. Accordingly, we have searched the ROSAT catalogs available through the HEASARC. Only two objects are close to the error box. One is the unidentified source 1WGA J1802.3-2151 in the WGA catalog (White, Giommi, & Angelini 1995), which lies slightly outside it. The other is the diffuse emission associated with W28. Finally, it has been suggested that magnetars evolve into anomalous X-ray pulsars (AXPs) (Kouveliotou et al. 1998). Sporadic bursts from an AXP could confirm this association. Accordingly, we have checked the positions of the six known (Gotthelf & Vasisht 1998 and references therein) and one proposed (Li & van den Heuvel 1999) AXPs, but none lies near this source. Given the shape and location of the error box, it is not unlikely that it will cross several interesting objects by chance coincidence, and the nature of this source therefore remains unknown. Based on the properties of the two events observed to date, it most closely resembles an SGR. Indeed, SGR1900+14 was discovered when it burst just 3 times in 3 days (Mazets et al. 1979); 13 years elapsed before it was detected again (Kouveliotou et al. 1993). Until SGR1801-23 bursts again, allowing a more accurate position to be derived for it, associating it with an SNR or quiescent soft X-ray source will be difficult. KH is grateful to JPL for Ulysses support under Contract 958056, and to NASA for Compton Gamma-Ray Observatory support under grant NAG 5-3811. On the Russian side, this work was supported by RSA Contract and RFBR grants N97-02-18067 and N99-02-17031. JvP acknowledges support from NASA grants NAG5-3674 and NAG5-7808. This study has made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA’s Goddard Space Flight Center.
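The super-Eddington statement of Section 2 is easy to reproduce. The sketch below (Python) inverts the isotropic-luminosity relation for an assumed peak flux; the flux value is a placeholder chosen to reproduce the quoted 250 pc, not the measured value from the tables.

```python
import math

# Sketch: the distance beyond which an observed peak flux implies a
# super-Eddington luminosity for a 1.4 Msun neutron star.
L_EDD = 1.8e38          # erg/s, Eddington luminosity (electron scattering)
PC = 3.086e18           # cm per parsec

def min_super_eddington_distance(peak_flux_cgs):
    """Distance (pc) at which 4*pi*d^2*F equals L_Edd; beyond this the
    source is super-Eddington."""
    d_cm = math.sqrt(L_EDD / (4.0 * math.pi * peak_flux_cgs))
    return d_cm / PC

print(min_super_eddington_distance(2.4e-5))  # ~250 pc, cf. Section 2
```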
# Legitimacy of wave-function expansion Dept. of Physics, Beijing University of Aeronautics and Astronautics, Beijing 100083, PRC C.Y. Chen, Email: cychen@public2.east.net.cn Abstract: In this letter we investigate the common procedure in which any wave function is expanded into a series of eigenfunctions. It is shown that, as far as dynamical systems are concerned, the expanding procedure involves various mathematical and physical difficulties. With or without introducing phase factors, such expansions do not represent dynamical wave functions. PACS numbers: 03.65-w A common concept, originally due to Dirac and von Neumann and since accepted as one of the essential principles of quantum mechanics, is that any wave function can be represented by a vector in a linear infinite-dimensional vector space, the Hilbert space, whose basis is formed by a complete set of orthogonal eigenfunctions \[1-4\]; namely a wave function can be expanded into $$\sum _nC_n(t)\mathrm{\Psi }_n(𝐫),$$ (1) where $`n`$ stands for a set of quantum numbers labeling the eigenfunction and $`C_n(t)`$ is a purely time-dependent complex number. The expanding procedure is also called the principle of superposition in some textbooks. Adopting the expanding procedure, we can write a dynamical, nonstationary, wave function in the form (spin disregarded) $$\sum _nC_n(t)\mathrm{exp}\left(-i\frac{\epsilon _n}{\hbar }t\right)\mathrm{\Psi }_n(𝐫),$$ (2) where $`\epsilon _n`$ is the eigenenergy of the system before a certain initial time $`t_0`$ and $`\mathrm{\Psi }_n(𝐫)`$ the eigenfunction in terms of the Hamiltonian $`H(t_0)`$. By substituting (2) into the time-dependent Schrödinger equation and making use of the orthogonality of the eigenfunctions, we arrive at a set of coupled ordinary-differential equations $$i\hbar \frac{dC_n}{dt}=\sum _lC_l(t)(H_1)_{nl}\mathrm{exp}(i\omega _{nl}t),$$ (3) where $`H_1=H(t)-H(t_0)`$ represents the variation of the Hamiltonian (not necessarily small). It is almost unanimously believed that (1) or (2) formally represents the real wave function, and that the set of ordinary-differential equations (3) is equivalent to the corresponding Schrödinger equation. Furthermore, the value of $`|C_n|^2`$ is, in Dirac’s perturbation theory, interpreted as a transition probability from one quantum state to another. While the expanding procedure outlined above seems rigorous and flawless, many difficulties that can be traced back to it have constantly bothered people in the community. Several decades ago it was noticed that the resulting formula of Dirac’s perturbation theory, derived from the expanding procedure, is not gauge-invariant. Later, some in the community found that coefficients of such an expansion do not represent transition probabilities . To cope with these difficulties, people kept on coming up with remedies. The preferential gauge, in which the vector potential vanishes whenever the electromagnetic field becomes zero, was proposed after the debate on the gauge aspect of Dirac’s perturbation theory . Furthermore, it was suggested that a certain phase factor should be introduced to the wave function before using the standard expanding procedure . In our view, those remedies have obscured, to some extent, the essence of the problem.
Actually, we can easily see the situation is unsatisfactory with or without these remedies: if we adopt the expanding procedure as it is, we risk conflicting with other physical principles, such as gauge invariance; if we believe that introducing extra additives can make the procedure work as it should, the expanding procedure becomes more a recipe than a basic physical principle. The objective of this letter is to directly investigate the validity of the expanding procedure. It will be illustrated that, as far as dynamical systems are concerned, the expansion form expressed by (1) or (2) does not represent a true dynamical wave function. If we forcefully use such expansions, anomalies will emerge. Furthermore, it will be shown that introducing extra phase factors does not really help. Conclusions concerning this subject are crucially important since many theories and many discussions in the literature are explicitly or implicitly related to the expansion principle discussed here. Before starting with our main discussion, it is appropriate to recall how the expanding procedure works for a stationary system. For such a system, we have a time-independent Hamiltonian $`H_0`$ and eigenfunctions $`\mathrm{\Psi }_n(𝐫)`$, and any wave function of the system can indeed be expressed by $$\mathrm{\Psi }(t,𝐫)=\sum _nC_n\mathrm{exp}\left(-i\frac{\epsilon _n}{\hbar }t\right)\mathrm{\Psi }_n(𝐫),$$ (4) where $`C_n`$’s are pure numbers independent of time and space. Note that the normalization condition $$\sum _n|C_n|^2=1$$ (5) holds for (4) and it ensures the unity of the total probability and the convergence of the expansion. We now try to determine whether or not a wave function of a nonstationary system can be similarly expanded. We will first investigate mathematical arguments concerning this issue. Consider a quantum system that has a stationary Hamiltonian $`H_i`$ before the initial time $`t_i`$ and another stationary Hamiltonian $`H_f`$ after the final time $`t_f`$. During the time period $`t_i<t<t_f`$, the system involves a dynamical change. In using an expansion of the form (1) to describe the wave function of the system, we need to select a set of eigenfunctions we wish to use. We may choose the set of eigenfunctions of $`H_i`$ which we denote by $`\{\mathrm{\Psi }_{n_i}\}`$, or we may equally choose the set of eigenfunctions of $`H_f`$ which we denote by $`\{\mathrm{\Psi }_{n_f}\}`$. Referring to the discussion on the stationary system, we know that the wave function can be expressed as, before $`t_i`$, $$\sum _{n_i}C_{n_i}\mathrm{exp}\left(-i\frac{\epsilon _{n_i}}{\hbar }t\right)\mathrm{\Psi }_{n_i}$$ (6) and after $`t_f`$ $$\sum _{n_f}C_{n_f}\mathrm{exp}\left(-i\frac{\epsilon _{n_f}}{\hbar }t\right)\mathrm{\Psi }_{n_f}.$$ (7) In harmony with the expansion (2), we may simply use the eigenfunctions $`\{\mathrm{\Psi }_{n_i}\}`$ to expand the wave function throughout the dynamical process. To ensure that everything goes smoothly, we need to verify that any one of $`\{\mathrm{\Psi }_{n_f}\}`$ can be expressed by $`\{\mathrm{\Psi }_{n_i}\}`$. If this is possible, we will say the two sets of eigenfunctions, or the two states, are compatible. It is found that for some dynamical systems the final state is incompatible with the initial state. As an explicit example, we take a look at a particle that is initially trapped by a magnetic field and then, after the field disappears, moves in free space.
An eigenfunction for the particle in a uniform magnetic field is $$\mathrm{\Psi }_{n,l,k_z}(\rho ,\phi ,z)=\frac{1}{\sqrt{2\pi }}R_{n,l}(\rho )e^{il\phi }e^{ik_zz}$$ (8) where $$R_{n,l}(\rho )=\frac{T(n,l)\rho ^{|l|}}{a^{1+|l|}}e^{(-\frac{\rho ^2}{4a^2})}F(-n,|l|+1,\rho ^2/2a^2)$$ with $$T(n,l)=\frac{1}{|l|!}\left[\frac{(|l|+n)!}{2^{|l|}n!}\right]^{\frac{1}{2}}$$ and $`F`$ is the confluent hypergeometric function $$F(\alpha ,\gamma ,u)=1+\frac{\alpha }{\gamma }\frac{u}{1!}+\frac{\alpha (\alpha +1)}{\gamma (\gamma +1)}\frac{u^2}{2!}+\mathrm{}\mathrm{}.$$ (9) On the other hand, an eigenfunction for a free particle motion is $$(8\pi ^3)^{-1/2}\mathrm{exp}(i𝐤𝐫).$$ (10) If we particularly choose a plane wave as $$\mathrm{\Psi }_{k^{\prime }}(𝐫)=C^{\prime }\mathrm{exp}(ik_z^{\prime }z),$$ (11) and expand it into $$\mathrm{\Psi }_{k^{\prime }}(𝐫)=\sum _{n,l,k_z}C_{n,l,k_z}\mathrm{\Psi }_{n,l,k_z}.$$ (12) where $`\mathrm{\Psi }_{n,l,k_z}`$ is an eigenfunction of the type (8). The coefficient $`C_{n,l,k_z}`$ has three parameters. For the purpose of this letter, we set $`l=0`$ and will be concerned with the convergence of the series in terms of the index $`n`$ only. Along this line, we obtain $$C_{n,0,k_z}\propto \frac{\delta (k_z-k_z^{\prime })}{a}\int e^{-\frac{\rho ^2}{4a^2}}F(-n,1,\rho ^2/2a^2)\rho \,d\rho .$$ (13) It happens that the expression above can be evaluated analytically and the result is, with the delta-function disregarded, $$C_{n+1,0,k_z}=-C_{n,0,k_z},$$ (14) which means that the series of the form (12) is not convergent in terms of the non-negative integer $`n`$. Actually, the difficulty can be seen just after we write down the two sets of eigenfunctions, (10) and (8). In the $`xy`$ plane, eigenfunctions (10) take finite values at infinity while eigenfunctions (8) tend to zero at infinity. It is obvious that these two kinds of eigenfunctions cannot express each other. Also note that the example presented here possesses a certain degree of generality. If we deal with a situation in which a magnetic-type field is applied to an atomlike system (having a discrete and a continuous spectrum), we will encounter similar difficulties. If one thinks that the argument presented above is not general enough, consider the following argument, in which we show that the normalization of the wave function breaks down when the expanding procedure is employed. We first write down the formal integration of differential equations (3). Let the disturbance time interval be $`t_N-t_0`$, where $`t_0`$ and $`t_N`$ denote the initial time and the final time at which the perturbation turns on and off respectively. Slice the time interval into $`N`$ small equal increments $$t_0,t_1,t_2,\mathrm{}\mathrm{},t_N,$$ (15) where $`t_{i+1}-t_i=(t_N-t_0)/N=\mathrm{\Delta }t`$. The coefficients $`C_k`$ can be evaluated by $$\begin{array}{c}C_k(t_1)=C_k(t_0)-(i/\hbar )\sum _{k^{\prime }}C_{k^{\prime }}(t_0)(H_1)_{kk^{\prime }}e^{i\omega _{kk^{\prime }}t_0}\mathrm{\Delta }t\hfill \\ \mathrm{}\mathrm{}\hfill \\ C_k(t_{i+1})=C_k(t_i)-(i/\hbar )\sum _{k^{\prime }}C_{k^{\prime }}(t_i)(H_1)_{kk^{\prime }}e^{i\omega _{kk^{\prime }}t_i}\mathrm{\Delta }t\hfill \\ \mathrm{}\mathrm{}.\hfill \end{array}$$ (16) According to Euler and Cauchy, if we slice the entire time into a sufficiently large number of intervals or if we are just concerned with the system’s behavior within a sufficiently short time, the integral expression (16) can be accepted as an accurate one.
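As a concrete check of the scheme (16), the following sketch (Python; the two-level matrix elements and frequencies are arbitrary assumptions, with $`\hbar =1`$) iterates the Euler steps and tracks the total probability $`\sum _k|C_k|^2`$:

```python
import numpy as np

# Sketch of the Euler scheme (16) for the coefficient equations (3).
# H1 is a self-adjoint perturbation; omega_kk' = (eps_k - eps_k')/hbar.
H1 = np.array([[0.3, 0.1],
               [0.1, -0.2]])
omega = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])

C = np.array([1.0 + 0j, 0.0 + 0j])   # initial condition (17): state s = 0
dt, nsteps = 1e-3, 1000
for i in range(nsteps):
    t = i * dt
    C = C - 1j * (H1 * np.exp(1j * omega * t)) @ C * dt   # one step of (16)

print(np.sum(np.abs(C) ** 2))   # drifts above 1, as argued in (19)-(22)
```

Because each Euler step multiplies the coefficient vector by $`I-iM\mathrm{\Delta }t`$ with $`M`$ Hermitian, the norm can only grow, which is exactly the point made analytically below.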
Suppose that the system is initially in one eigenstate, namely we assume $$C_s(t_0)=1,C_k(t_0)=0(k\ne s),$$ (17) then we find that at the later time $`t_1`$ $$C_s(t_1)=1-\frac{i}{\hbar }(H_1)_{ss}\mathrm{\Delta }t.$$ (18) Since $`H_1=H(t)-H(t_0)`$ is a self-adjoint operator, $`(H_1)_{ss}`$ is a real number. (For a uniform electric field, we can assume $`H_1\propto x`$ and $`x_{ss}`$ is just the average $`x`$-position in the initial state.) Then, (18) tells us that $$|C_s(t_1)|^2\ge 1.$$ (19) For other coefficients, we have $$\sum _{k\ne s}|C_k(t_1)|^2\ge 0.$$ (20) Eqs. (19) and (20) lead us to $$\sum _k|C_k(t_1)|^2\ge 1.$$ (21) The equality sign holds only when all the matrix elements of $`H_1`$ are zero and the system is not disturbed at all. In that case, we investigate the value of the same expression at the time $`t_2`$. In such a way, we will surely find $$\sum _k|C_k(t_i)|^2>1.$$ (22) This clearly shows that the expansion of the form (2) or (1) is not the correct formal solution of the time-dependent Schrödinger equation. (In this letter, we take it for granted that the Schrödinger equation preserves the unity of the total probability.) One interesting question arises immediately: what does a dynamical wave function look like? The discussion above has suggested that such a wave function must involve coupled time-and-space dependence, which cannot be handled with the variable-separation technique. (Finally, incompatible functions may also get involved.) Functions that can be expressed by (1) or (2) constitute a special class of functions; but, paradoxically, none of them represents a dynamical wave function. We now turn to physical arguments on the issue and, in passing, study the effects of introducing phase factors. Before doing that it is enlightening to recall the well-known concept in quantum mechanics that the gauge fields, the Hamiltonian of the system, the phase factor of the wave function and the operators representing observables are not uniquely defined, and that they covary according to ($`m=Q=c=1`$ in this letter) $$\{\begin{array}{c}(𝐀,\mathrm{\Phi })\to (𝐀^{\prime }=𝐀+\nabla f,\mathrm{\Phi }^{\prime }=\mathrm{\Phi }-\partial _tf)\hfill \\ H(𝐀,\mathrm{\Phi })\to H(𝐀^{\prime },\mathrm{\Phi }^{\prime })\hfill \\ \mathrm{\Psi }(t,𝐫)\to e^{if(t,𝐫)}\mathrm{\Psi }(t,𝐫)\hfill \\ L(𝐩-𝐀,𝐫)\to L(𝐩-𝐀^{\prime },𝐫),\hfill \end{array}$$ (23) where $`f(t,𝐫)`$ is an arbitrary differentiable function of time and space. Only if the covariance relationship given above is observed will the physical outcomes of the formalism be strictly gauge-invariant. We can immediately see that our expansion principle will suffer in this respect. The third equation of (23) illustrates that even if the wave function of interest could be expressed by (1) or (2) in a special gauge, the wave function in a general gauge would involve an extra nontrivial phase factor $`e^{if(t,𝐫)}`$. Such nontrivial phase factors pose a serious difficulty for the standard expanding procedure expressed by (1), (2) as well as the differential-equation set (3). This partly explains why the time-dependent perturbation theory encounters gauge difficulties. At this point, it is in order to point out that the standard expansion procedure is in conflict with the covariance relationship expressed by (23). Consider a quantum particle that is bounded in a mechanical well. Its state before $`t=0`$ is assumed to be $$\mathrm{\Psi }(t<0)=\mathrm{exp}\left(-i\frac{\epsilon _s}{\hbar }t\right)\mathrm{\Psi }_s(𝐫).$$ (24) At and after $`t=0`$, an electromagnetic disturbance is applied to the system.
If we admit the standard expansion principle, the wave function after $`t=0`$ must be in the form $$\mathrm{\Psi }(t\ge 0)=\sum _lC_l(t)\mathrm{exp}\left(-i\frac{\epsilon _l}{\hbar }t\right)\mathrm{\Psi }_l(𝐫),$$ (25) where $$C_s(0)=1,C_l(0)=0(\mathrm{for}\;l\ne s).$$ (26) We note that the quantum system has its own inertia (the heavier the particle the larger the inertia) so that all coefficients in the expansion, as well as physical observables associated with the system, must vary slowly in comparison with the variation of the disturbance field. The velocity operator reads, according to the standard definition, $$𝐯=𝐩-𝐀=-i\hbar \nabla -𝐀,$$ (27) where $`𝐀`$ is the vector potential chosen to represent the electromagnetic disturbance. Before $`t=0`$, the average velocity of the particle is $$\overline{𝐯}=\langle \mathrm{\Psi }_s|-i\hbar \nabla |\mathrm{\Psi }_s\rangle .$$ (28) At the instant $`t=0+ϵ`$, where $`ϵ`$ is an infinitesimally small time interval, the average velocity becomes $$\overline{𝐯}=\langle \mathrm{\Psi }_s|-i\hbar \nabla |\mathrm{\Psi }_s\rangle -\langle \mathrm{\Psi }_s|𝐀|\mathrm{\Psi }_s\rangle .$$ (29) The trouble with the expansion (25) is now obvious: not only is the average value of the velocity gauge-dependent, but its value also varies as promptly as the vector potential. Actually, all observables, including the transition probability, suffer from the same difficulty with the use of the expansion (25). There remains one final question concerning whether or not the expression $$e^{if(t,𝐫)}\sum _nC_n(t)\mathrm{\Psi }_n(𝐫)$$ (30) represents a true dynamical wave function. Our answer is negative. If this expression represented a dynamical wave function, we could find a special gauge under which $$\sum _nC_n(t)\mathrm{\Psi }_n(𝐫)$$ (31) would represent a dynamical wave function. According to the arguments presented in this letter, particularly those independent of gauge, this is not at all possible. In conclusion, it has been shown that the common expanding procedure, or the so-called principle of superposition, holds only for stationary systems. However, we notice that in real life many quantum theories concerning nonstationary dynamical processes have assumed its general validity. Discussion with Professors R. G. Littlejohn and Dongsheng Guo is gratefully acknowledged. This work is partly supported by funds provided by the Education Ministry, PRC.
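The gauge covariance (23) used in this letter is easy to verify numerically. The sketch below (Python, one spatial dimension, units with $`\hbar =m=Q=1`$; the wave packet, $`𝐀`$ and $`f`$ are arbitrary illustrative choices) confirms that the expectation of $`𝐯=𝐩-𝐀`$ is unchanged under a gauge transformation, while $`\langle 𝐩\rangle `$ alone is not:

```python
import numpy as np

# Sketch: 1-D check of the covariance relations (23).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4.0).astype(complex)
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))
A = 0.3 * np.ones_like(x)        # constant vector potential (assumed)
f = 0.5 * x**2                   # gauge function f(x) (assumed)

def v_expectation(psi, A):
    p_psi = -1j * np.gradient(psi, dx)                 # p acting on psi
    return np.real(np.trapz(np.conj(psi) * (p_psi - A * psi), x))

psi_g = np.exp(1j * f) * psi                           # Psi -> e^{i f} Psi
A_g = A + np.gradient(f, dx)                           # A  -> A + grad f
print(v_expectation(psi, A), v_expectation(psi_g, A_g))  # identical values
```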
# Congruence ABC implies ABC ## 1 Introduction The ABC conjecture was introduced by Masser and Oesterlé in 1985, and has since been shown to be related to many other conjectures, especially conjectures regarding the arithmetic of elliptic curves . For our purposes, an ABC-solution $`𝐬`$ is a triple $`(a,b,c)`$ of distinct relatively prime integers satisfying $`a+b+c=0`$, and such that $`a`$ and $`b`$ are negative. (The requirement that the integers be distinct is included only to simplify the exposition below.) If $`n>0`$ is an integer, the radical $`\mathrm{rad}(n)`$ is defined to be the product of all primes dividing $`n`$. For any $`ϵ>0`$, we define a function on ABC-solutions by $$f(𝐬,ϵ)=\mathrm{log}(c)-(1+ϵ)\mathrm{log}\mathrm{rad}(abc).$$ Then the ABC conjecture can be phrased as follows: ###### Conjecture 1 (ABC conjecture). For each $`ϵ>0`$, there exists a constant $`C_ϵ`$ such that $$f(𝐬,ϵ)<C_ϵ$$ for all $`𝐬`$. In , Oesterlé showed that the ABC conjecture is equivalent to a conjecture of Szpiro on elliptic curves (\[2, Conj. 4\]). In the proof, he observes that if the ABC conjecture is known to hold for all $`(a,b,c)`$ with $`16|abc`$, then the full ABC conjecture can be shown to hold. This suggests considering a family of weaker conjectures indexed by integers $`N`$, as follows: ###### Conjecture 2 (Congruence ABC conjecture for $`N`$). For each $`ϵ>0`$, there exists a constant $`C_ϵ`$ such that $$f(𝐬,ϵ)<C_ϵ$$ for all $`𝐬`$ such that $`N|abc`$. It has long been known to experts that the congruence ABC conjecture for any $`N`$ is equivalent to the full ABC conjecture. However, a proof has never to our knowledge appeared in the literature, and we take the opportunity to provide one in this note. ## 2 Congruence ABC implies ABC ###### Theorem 3. The congruence ABC conjecture for $`N`$ implies the ABC conjecture. ###### Proof. For each positive even integer $`n`$, we define an operation $`\mathrm{\Theta }_n`$ on ABC-solutions as follows. Let $`𝐬=(a,b,c)`$ be an ABC-solution. Then $$\mathrm{\Theta }_n(𝐬)=(-2^m(a-b)^n,-2^m[c^n-(a-b)^n],2^mc^n)$$ where $`m=n`$ if $`c`$ is even, and $`m=0`$ otherwise. Then $`\mathrm{\Theta }_n(𝐬)`$ is again an ABC-solution. ###### Lemma 4. There exist constants $`c_{n,ϵ}>0`$ and $`c_{n,ϵ}^{\prime }`$ such that $$f(\mathrm{\Theta }_n(𝐬),ϵ/[n+(n-1)ϵ])\ge c_{n,ϵ}f(𝐬,ϵ)+c_{n,ϵ}^{\prime }.$$ ###### Proof. Let $`A=-2^m(a-b)^n,B=-2^m(c^n-(a-b)^n),C=2^mc^n`$. Then $$\mathrm{log}\mathrm{rad}(ABC)\le \mathrm{log}|a-b|+\mathrm{log}\mathrm{rad}(abc)+\mathrm{log}\mathrm{rad}(B/ab).$$ Now (using $`c^n=(a+b)^n`$ for even $`n`$) $$\frac{-B}{2^mab}=\frac{(a+b)^n-(a-b)^n}{ab}=4[(a+b)^{n-2}+(a+b)^{n-4}(a-b)^2+\mathrm{}+(a-b)^{n-2}]\le 2n(a+b)^{n-2},$$ and the radical is insensitive to the power of $`2`$, so $`\mathrm{log}\mathrm{rad}(B/ab)\le (n-2)\mathrm{log}c+\mathrm{log}2n`$. So $`\mathrm{log}\mathrm{rad}(ABC)`$ $`\le `$ $`\mathrm{log}|a-b|+\mathrm{log}\mathrm{rad}(abc)+(n-2)\mathrm{log}c+\mathrm{log}2n`$ $`\le `$ $`(n-1)\mathrm{log}c+\mathrm{log}\mathrm{rad}(abc)+\mathrm{log}2n`$ $`=`$ $`(n-1)\mathrm{log}c+(1+ϵ)^{-1}(\mathrm{log}c-f(𝐬,ϵ))+\mathrm{log}2n`$ $`=`$ $`n\mathrm{log}c-ϵ(1+ϵ)^{-1}\mathrm{log}c-(1+ϵ)^{-1}f(𝐬,ϵ)+\mathrm{log}2n`$ $`=`$ $`(1-ϵ[n(1+ϵ)]^{-1})(\mathrm{log}C-m\mathrm{log}2)-(1+ϵ)^{-1}f(𝐬,ϵ)+\mathrm{log}2n.`$ It follows that $$\mathrm{log}C\ge \frac{n(1+ϵ)}{(n+nϵ-ϵ)}(\mathrm{log}\mathrm{rad}(ABC)+(1+ϵ)^{-1}f(𝐬,ϵ)-\mathrm{log}2n)-n\mathrm{log}2.$$ Thus $$f(\mathrm{\Theta }_n(𝐬),ϵ/[n+nϵ-ϵ])\ge c_{n,ϵ}f(𝐬,ϵ)+c_{n,ϵ}^{\prime },$$ where $$c_{n,ϵ}=n/(n+nϵ-ϵ)$$ and $$c_{n,ϵ}^{\prime }=-n(1+ϵ)(\mathrm{log}2n)/(n+nϵ-ϵ)-n\mathrm{log}2.$$ We now proceed with the proof of Theorem 3. Assume that the congruence ABC conjecture for $`N`$ is true.
Then there exists $`C_ϵ`$ such that $$f(𝐬,ϵ)<C_ϵ$$ for all $`𝐬`$ with $`N|abc`$. Let $`n=\varphi (N)`$. If $`N=2`$, Theorem 3 is trivial. We may therefore assume that $`n`$ is even. ###### Lemma 5. Let $`(A,B,C)=\mathrm{\Theta }_n(𝐬)`$. Then $`N|ABC.`$ ###### Proof. Suppose $`p`$ is an odd prime dividing $`N`$, and let $`p^\nu `$ be the largest power of $`p`$ dividing $`N`$. Then $`(p-1)p^{\nu -1}|n`$. In particular, $`\nu <n`$. If $`p`$ divides $`c`$ or $`a-b`$, then $`p^\nu |p^n|ABC`$. If $`p`$ divides neither $`c`$ nor $`a-b`$, then $`-A=2^m(a-b)^n`$ and $`C=2^mc^n`$ are congruent mod $`p^\nu `$; therefore, $`p^\nu |B`$. Now let $`2^\nu `$ be the largest power of $`2`$ dividing $`N`$. If $`c`$ is even, then so is $`a-b`$, and exactly one of $`c`$ and $`a-b`$ is a multiple of $`4`$. Thus, one of $`(a-b)^n`$ and $`c^n`$ is a multiple of $`4^n`$. Since, in this case, $`-A=2^n(a-b)^n`$ and $`C=2^nc^n`$, one of $`A`$ and $`C`$ is a multiple of $`2^n`$, whence also of $`2^\nu `$. If, on the other hand, $`c`$ is odd, then so is $`a-b`$. Then $`(a-b)^n`$ and $`c^n`$ are both congruent to $`1`$ mod $`2^\nu `$, so $`2^\nu |B`$. ∎ For any $`ϵ>0`$, it now follows that $$f(𝐬,ϵ)\le c_{n,ϵ}^{-1}(f(\mathrm{\Theta }_n(𝐬),ϵ/[n+nϵ-ϵ])-c_{n,ϵ}^{\prime })\le c_{n,ϵ}^{-1}(C_{ϵ/[n+nϵ-ϵ]}-c_{n,ϵ}^{\prime })$$ for any ABC-solution $`𝐬`$. Since the right hand side depends only on $`ϵ`$ and $`N`$, this proves the full ABC conjecture. ∎ ## Acknowledgments It is my pleasure to thank Joseph Oesterlé and the referee for helpful comments on this note.
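For readers who wish to experiment, here is a small sketch (Python, using sympy for factorization) of the objects in the proof — $`\mathrm{rad}`$, $`f(𝐬,ϵ)`$ and the operation $`\mathrm{\Theta }_n`$. The sign conventions are those reconstructed above, and the sample solution is an arbitrary choice.

```python
import math
from sympy import primefactors

def rad(n):
    """Radical of n: product of the distinct primes dividing n."""
    p = 1
    for q in primefactors(abs(n)):
        p *= q
    return p

def f(s, eps):
    a, b, c = s
    return math.log(c) - (1 + eps) * math.log(rad(a * b * c))

def theta(s, n):
    """The map Theta_n of the proof (n a positive even integer)."""
    a, b, c = s
    m = n if c % 2 == 0 else 0
    A = -(2 ** m) * (a - b) ** n
    B = -(2 ** m) * (c ** n - (a - b) ** n)
    C = (2 ** m) * c ** n
    assert A + B + C == 0 and A < 0 and B < 0 < C
    return (A, B, C)

s = (-5, -27, 32)        # an ABC-solution: a, b < 0, a + b + c = 0
t = theta(s, 2)          # Lemma 5 guarantees N | ABC when n = phi(N)
print(t, f(s, 0.1), f(t, 0.1))
```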
# Discovery of Young Stellar Objects at the Edge of the Optical Disk of Our Galaxy ## 1 Introduction Digel, de Geus, & Thaddeus (1994) found more than 10 molecular clouds possibly beyond the optical disk of our Galaxy in the direction to the Perseus arm (l $``$ 130). Their Galactic radius ($`R_\mathrm{g}`$) is estimated at more than 20 kpc and as much as 28 kpc (see also, Digel et al. 1996; Heyer & Terebey 1998). Because the distribution of Population I and Population II stars in the Galaxy have a sharp cutoff at around 18–20 kpc and 14 kpc, respectively (Digel et al. 1994, and references therein), these distant molecular clouds are potentially very interesting sites to investigate the star-forming process away from the Galactic disk with little or no perturbation from the spiral arms. In such an outermost Galaxy region, the molecular gas surface density is much smaller than in spiral arms (Heyer et al. 1998; Heyer & Terebey 1998; Digel et al. 1996) and the H I surface density is one fifth to one tenth of that in the spiral arms (e.g., Wouterloot et al. 1990). Thus, the global star formation environment in the outermost Galaxy region is quite different from that in the spiral arms. Also, metallicity is very low in such a region. The metal abundance at $`R_g`$ = 20 kpc is estimated at 12 + log(O/H) $``$ 8.0, assuming the standard abundance curve (e.g, Smartt & Rolleston 1997). This metallicity is comparable to that of dwarf irregular galaxies or some damped Ly$`\alpha `$ systems of higher metallicity (see Ferguson, Gallagher, & Wyse 1998, and references therein). Therefore, studies of star formation in the outermost Galaxy may reveal the details of the star formation process in an environment similar to that thought to exist during the early stage of the formation of the Galactic disk. Of the 11 distant molecular clouds found by Digel et al. (1994, 1996), Cloud 2 has the largest kinematic $`R_\mathrm{g}`$ of 28 kpc. If this kinematic $`R_\mathrm{g}`$ is correct, Cloud 2 is located far beyond the optical disk of our Galaxy and at the edge of the H I gas disk (around $``$30 kpc, e.g., Kulkarni, Blitz, & Heiles 1982). Cloud 2 has a high CO luminosity (M<sub>CO</sub> $``$ 3.7 $`\times `$ 10<sup>4</sup> $`M_{}`$) that suggests star formation activity in this cloud (Digel et al. 1994). Indeed, de Geus et al. (1993) found an extended H$`\alpha `$ emission that has the same radial velocity as Cloud 2 ($`V_{\mathrm{LSR}}`$ = $``$103 km s<sup>-1</sup>). They concluded that the H$`\alpha `$ traces an H II region associated with Cloud 2, and they proposed an early B-type star near Cloud 2 (“MR-1”: Muzzio & Rydgren 1974; see also Smartt et al. 1996) as the photoionizing source. However, no star-forming activities like those in the nearby star-forming molecular clouds have been reported thus far. Smartt et al. (1996) obtained high spectral resolution optical spectra of MR-1 and found $`R_\mathrm{g}`$ $``$ 15 to 19 kpc for MR-1 based on the atmospheric parameters from their spectra and the photometry by Muzzio & Rydgren (1974)<sup>1</sup><sup>1</sup>1 Smartt et al. (1996) derived an $`R_\mathrm{g}`$ of 15 kpc (heliocentric distance of 8.2 kpc) based on LTE model of the optical spectrum. They suggested that a non-LTE model can make it larger up to 19 kpc (heliocentric distance of 12 kpc). Because the non-LTE model is more likely for stars like MR-1 with high effective temperatures as described by Smartt et al. (1996), we assume $`R_\mathrm{g}`$ $`=`$ 19 kpc and a heliocentric distance of 12 kpc hereafter. 
The $`R_\mathrm{g}`$ of Earth is assumed to be 8.5 kpc. . They suggest that MR-1 is probably physically related to the H II region (de Geus et al. 1993) because the radial velocity of MR-1 ($``$90 $`\pm `$ 13 km s<sup>-1</sup>) is in reasonable agreement with the nebular velocity (–103 km s<sup>-1</sup>). If this is the case, Cloud 2 is located close to the edge of the optical disk ($`R_\mathrm{g}`$ $``$ 20 kpc) rather than far beyond the optical disk. However, it still remains as one of the most distant molecular clouds/H II regions known to date. The metal abundance of MR-1 is estimated at 12 $`+`$ log(O/H) $``$ 8.3 (Smartt & Rollenston 1997), which is comparable to that for irregular dwarf galaxies (e.g., $``$8.4 for the Large Magellanic Cloud; Arnault et al. 1988). Here we report a discovery of young stellar objects (YSOs) associated with Cloud 2 made during our near-infrared studies. These sources could shed light on the star formation processes in such a low-density and low-metallicity environment as well as the distance to Cloud 2. We have made comprehensive near-infrared observations of Cloud 2 that include a wide-field survey, spectroscopy of detected infrared sources, and deep imaging for the purpose of detecting low-mass YSOs. The details of our study will be reported in subsequent papers. ## 2 Observations and Results In October 1997, we made an initial near-infrared survey of Cloud 2 with University of Hawaii’s QUIST (Quick Infrared Survey Telescope) mounted at the UH 0.6 m telescope atop Mauna Kea. QUIST consists of University of Hawaii’s QUIRC (Quick Infrared Camera), a near-infrared camera with 1024$`\times `$1024 HgCdTe HAWAII array, and 25.4 cm Cassegrain telescope that provides a 25$`\mathrm{}`$ field of view with a 1.5$`\mathrm{}`$ pixel<sup>-1</sup> scale.<sup>2</sup><sup>2</sup>2The 0.6-m telescope is not used. The QUIST telescope is attached to its equatorial mount. The observing was done remotely from the Institute for Astronomy in Honolulu. The observations were partly affected by intermittent cirrus. Several standard stars from Elias et al. (1982) were observed at several airmass positions for photometric calibration. We obtained images of a field centered on Cloud 2 in three near-infrared bands, $`J`$ (1.25 $`\mu `$m), $`H`$ (1.65 $`\mu `$m), and $`K`$ (2.2 $`\mu `$m). The total integration times for each band were 36 min, 36 min, and 45 min, respectively. We detected seven red sources associated with Cloud 2 with QUIST (Fig. 1). The coordinates and near-infrared magnitudes of all sources and MR-1 (Muzzio & Rydgren 1974) are summarized in Table 1. All of the near-infrared sources are associated with $`IRAS`$ sources in Cloud 2: IRAS 02450+5816 for IRS 1; IRAS 02447+5811 for IRS 2, 3, 4, 5; and IRAS 02455+5808 for IRS 6&7 (Fig. 2a and Table 2). JHK photometry has been performed using IRAF APPHOT tasks.<sup>3</sup><sup>3</sup>3 IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. An aperture of 18$`\mathrm{}`$ was employed. The resultant $`JH`$ vs. $`HK`$ color-color diagram is shown in Figure 3. We made follow-up $`K`$-band spectroscopy of IRS 1 and IRS 2 with the near-infrared spectrograph CGS4 at UKIRT<sup>4</sup><sup>4</sup>4The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the U.K. Particle Physics and Astronomy Research Council. in December 1997. 
IRS 1 and 2 are two bright sources near the northern and southern clumps of the molecular cloud, respectively (see Fig. 2). A 40 grooves mm<sup>-1</sup> grating that provides a spectral resolution of $`\lambda /\mathrm{\Delta }\lambda `$ $`=`$ 900 was used. Because seeing was excellent, we used a narrow (0$`\stackrel{}{\mathrm{.}}`$6) slit with the tip-tilt secondary. Observing conditions were photometric. To achieve sufficient sampling, we took three exposures with one-third pixel shift between exposures. After basic reductions (e.g., sky subtraction and flattening), one-dimensional spectra were extracted with standard IRAF tasks. The standard star HR831 was used for the correction of atmospheric extinction and flux calibration. The Br$`\gamma `$ absorption line in the standard spectrum was removed by interpolation before the extinction correction. The results are shown in Fig. 4. When observing IRS1, the humidity was so low and stable that we could clearly detect emission lines in spectral regions of significant telluric water vapor absorption. The details of this spectroscopy will be described in a separate paper with results from our new CGS4 spectroscopy of additional Cloud 2 sources (Kobayashi & Tokunaga 1999a). ## 3 Discussion ### 3.1 Near-infrared Sources Among all the detected sources in the observed field, the seven red sources are distinctively red as shown in the true color pictures (Fig. 1) and in the ($`JH`$) vs. ($`HK`$) color-color diagram (Fig. 3). All of the red sources are associated with Cloud 2, and no other bright red sources were found apart from Cloud 2 in the surveyed field (Fig. 1). All sources except IRS 2 appeared to be point sources with the QUIST spatial resolution ($``$2$`\mathrm{}`$). IRS 2 is discussed in more detail in §3.3. The Five College Radio Astronomy Observatory (FCRAO) <sup>12</sup>CO survey data (Heyer et al. 1998)<sup>5</sup><sup>5</sup>5We obtained the FCRAO data electronically from the NASA Astronomical Data Center at http://adc.gsfc.nasa.gov/. show no foreground clouds in the direction of Cloud 2. However, a number of small foreground clouds are around Cloud 2 in the surveyed field: a small cloud in the local arm at 5$`\mathrm{}`$ southward, another small cloud in the local arm at 10$`\mathrm{}`$ westward, a small cloud associated with the Perseus arm at 10$`\mathrm{}`$ eastward, and a large cloud associated with the Perseus arm at 20$`\mathrm{}`$ northward. In spite of the existence of many foreground clouds in the surveyed field, we detected red sources only in the small area centered at Cloud 2. This result strongly suggests that all the red sources are physically associated with Cloud 2. All seven sources show a large $`HK`$ excess of more than 0.8. As shown in Figure 3, the $`HK`$ excesses of the sources except IRS 4 are not due to interstellar extinction but are caused by intrinsically large $`HK`$ excess. YSOs, highly obscured late-type stars (e.g, OH-IR stars, protoplanetary nebulae (PPNs)), and active galactic nuclei (AGNs) are known to show large intrinsic $`HK`$ excess from dust emission (e.g., Lada & Adams 1992 for YSOs; García-Lario et al. 1997 for late-type stars; and Hunt et al. 1997 for AGNs). Although it is difficult to distinguish between these three classes of objects solely from near-infrared colors, the red sources are most likely YSOs in view of the association with the molecular cloud. The two brightest red sources, IRS 6 and 7, which, among the red sources, are most distant from Cloud 2 on the sky (Fig. 
2a), might be foreground stars in view of their brightness (a few magnitudes brighter than other sources in Cloud 2; see Table 1) and relatively large angular distance from Cloud 2 (7$`\mathrm{}`$–8$`\mathrm{}`$ from the CO peaks; see Fig. 2a). Also, they are located at the edge of one of the foreground molecular clouds in the Perseus arm. These two sources are associated with the bright $`IRAS`$ point source, IRAS 02455+5808, but not resolved within the $`IRAS`$ beam (1 $`\sigma `$ ellipse of 37<sup>′′</sup> $`\times `$ 10<sup>′′</sup> with PA $`=`$ 59). The $`IRAS`$ color is typical for various kinds of objects (e.g., galaxies, YSOs, planetary nebulae) and does not reveal the nature of IRS 6 and 7 clearly. The pointlike appearance and extremely red near-infrared color ($`HK`$ $`>`$ 1.5) suggest that they are at least Galactic stars. Further study is necessary to clarify the nature of those sources. MR-1 is located on a reddening vector from early-type stars. The visual extinction of MR-1 is estimated at about $`A_V=`$ 3 to 4 mag from this color-color diagram. This is consistent with the estimate from $`B`$ and $`V`$ photometry of $`A_V=`$ 3.1 mag (Muzzio & Rydgren 1974). ### 3.2 IRS 1 The $`K`$-band spectrum of IRS 1 (Fig. 4) shows three strong hydrogen recombination lines: Pa$`\alpha `$ (1.875 $`\mu `$m), Br$`\delta `$ (1.945 $`\mu `$m), and Br$`\gamma `$ (2.166 $`\mu `$m). Those lines show a blueshift of about 100 to 200 km s<sup>-1</sup>, suggesting IRS 1 is not an extragalactic object. Also, our $`K`$-band spectrum shows that IRS 1 is unlikely to be an OH/IR star or a PPN because these objects do not usually show hydrogen emission lines. OH/IR stars show strong CO/H<sub>2</sub>O absorption lines (Nagata 1999) and PPNs usually show hydrogen absorption lines (Oudmaijer et al. 1995; Hrivnak, Kwok, & Geballe 1994). Although a few PPNs, possibly more evolved than most PPNs, are known to show near-infrared hydrogen emission lines like planetary nebulae (e.g., Aspin et al. 1993 for M 1-16; Thronson 1981 for AFGL 618), it is highly unlikely that such a rare source is located near an $`IRAS`$ source in a molecular cloud (Fig. 2a). Instead, it is highly plausible that the near-infrared emission lines are signatures of an H II region around YSOs. For the reasons above, we conclude that IRS 1 is a YSO physically associated with Cloud 2. Assuming the Galactic radius of 19 kpc for IRS 1 (heliocentric distance = 12 kpc), the $`K`$-band absolute magnitude without any correction for extinction is $`M_K`$ = $``$ 2.4 mag. This is comparable to those for high to intermediate mass YSOs such as Herbig Ae/Be stars (e.g., Hillenbrand et al. 1992). We estimate the spectral type of IRS 1 roughly at mid to late B from the K-band apparent magnitudes and distances for the Herbig Ae/Be samples in Hillenbrand et al (1992). ### 3.3 IRS 2 IRS 2 is located at the southern peak of the CO molecular cloud as well as at the center of the error ellipse of $`IRAS`$ 02450+5816 (Figs. 2a, 2b). IRS 2, 3, 4, and 5 form a cluster of red sources near the southern CO peak (Figs. 2a, 2b); IRS 2 is the brightest. The near-infrared color of IRS 2 shows that it is as highly extinguished ($`A_V`$ $``$ 10 mag) as IRS 1. IRS 2 appeared to be extended in the QUIST image with FWHM of $``$7$`\mathrm{}`$ (Fig. 1b). We recently obtained deep $`JHK`$ images of Cloud 2 with higher spatial resolution and found that IRS 2 is a cluster of more than 20 red pointlike sources (Kobayashi & Tokunaga 1999b). 
This morphology strongly suggests that IRS 2 is a star cluster in or behind the molecular cloud. Further observations are necessary to clarify the nature of IRS 2 as well as of IRS 3/4/5. ### 3.4 Star Formation in Cloud 2 The ionized gas traced by H$`\alpha `$ emission extends from MR-1 toward Cloud 2. The peaks of the H$`\alpha `$ emission are between the molecular cloud peaks and MR-1 (de Geus et al. 1993; see also Fig. 2a). Since IRS 1 is located at the center of the H$`\alpha `$ emission, it could also be one of the major ionizing sources of this H II region. However, MR-1 is likely to dominate the ionization of the entire H II region because the number of ionizing photons from IRS 1 is expected to be much lower than that from MR-1, assuming a spectral type of mid- to late-B and B0–1, respectively, for IRS 1 and MR-1. The $`IRAS`$ source associated with IRS 1 (02450+5816) is located between the H$`\alpha `$ peak and northern CO peak of Cloud 2 (Figs. 2a, 2b). This pattern is typical for Galactic H II regions (e.g., Gatley et al. 1979 for M17): young OB stars photoionize the surface of an associated molecular cloud and make the warm dust region which is traced by $`IRAS`$ 60/100$`\mu `$m flux. IRAS 02450+5816 is not detected at 12 or 25 $`\mu `$m but only at 60 and 100 $`\mu `$m. Its - color temperature (about 30 K; assuming emissivity $`ϵ_\lambda `$ $``$ $`\lambda ^2`$) is significantly lower than those for stars, planetary nebulae, single YSOs or active galaxies (see, e.g., Walker et al. 1989). Also, IRAS 02450+5816 is cataloged with a “small-scale structure flag,” which denotes an association with a confirmed extended source (IRAS Point Source Catalogue 1988). These characteristics suggest that the $`IRAS`$ source is a warm extended region adjacent to a molecular cloud rather than a single object with compact far-infrared emission. Judging from the low dust temperature ($``$30 K), the warm region is not a prominent photodissociation region (PDR) in a young star cluster (e.g., S140: Timmermann et al. 1996) but a less energetic one in a dark cloud such as $`\rho `$ Oph (Liseau et al. 1998). This is consistent with the suggestion by de Geus et al. (1994) that Cloud 2 is more like a dark cloud (e.g., Taurus dark cloud) than a large star-forming complex with OB star cluster (e.g., Orion molecular cloud complex). The bolometric luminosity of IRAS 02450+5816 is estimated to be $`L_{\mathrm{IR}}`$ $``$ 1000 $`L_{}`$ from the $`IRAS`$ flux densities and assuming a 12 kpc heliocentric distance (Emerson 1988; Tokunaga 1999). Assuming a spectral type of B0V–B1V, the luminosity of MR-1 is expected to be $`L_{\mathrm{IR}}`$ $``$ 10<sup>4</sup> $`L_{}`$ if all the emitting photons are entirely absorbed by the molecular cloud (see Fig. 2 in MacLeod et al. 1998). If we assume that the northern peak of Cloud 2 covers only 10% of the sphere centered at MR-1 (the cone of 60 apex angle), the observed $`IRAS`$ luminosity can be explained by the ionization of MR-1. Although it is hard to estimate a precise solid angle from the current data, it is likely that MR-1 is the major ionizing source exciting the H II region and the PDR. The $`IRAS`$ source associated with the southern CO peak (IRAS 02447+5811) appears to have a small offset from the CO peak to MR-1 as is the case for the northern peak (Fig. 2b). Since the geometry of an ionizing source, ionizing gas, an $`IRAS`$ source, and a molecular cloud is similar to that for the northern peak, IRAS 02447+5811 could also be a PDR associated with Cloud 2. 
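The solid-angle estimate in the preceding paragraph can be checked with a few lines of code (a sketch; the MR-1 luminosity of 10<sup>4</sup> $`L_{}`$ is the value assumed in the text):

```python
import math

# Sketch of the Section 3.4 luminosity budget: the fraction of MR-1's
# photons intercepted by a cone of 60 deg full apex angle.
def covering_fraction(apex_deg):
    """Solid-angle fraction Omega/(4 pi) of a cone with full apex angle."""
    half = math.radians(apex_deg / 2.0)
    return (1.0 - math.cos(half)) / 2.0

frac = covering_fraction(60.0)   # ~0.07, i.e. of order 10%
print(frac, frac * 1e4)          # intercepted luminosity ~1e3 Lsun,
                                 # comparable to L_IR of IRAS 02450+5816
```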
The near-infrared sources IRS 1–5 are located between the H$\alpha$ peaks and the molecular cloud peaks, near the $IRAS$ sources (Figs. 2a, 2b). This geometry suggests that the photoionization by MR-1 triggered the formation of the near-infrared sources in Cloud 2. Thus, star formation in Cloud 2 seems to be dominated by the single early B-type star MR-1. It is also interesting to consider how a single B star, MR-1, was formed in the outermost Galaxy, but this is beyond the scope of this paper.

## 4 Conclusion

We have conducted a wide-field near-infrared search for YSOs associated with Cloud 2 as denoted by Digel et al. (1994). This cloud is one of the most distant molecular clouds from the Galactic center known thus far; its Galactic radius is estimated to be 15–19 kpc (Smartt et al. 1996). Although extended H$\alpha$ emission is associated with this cloud, ongoing star-forming activity like that in the nearby star-forming molecular clouds has not been previously reported. We have discovered seven very red near-infrared sources in and around Cloud 2 with wide-field imaging in the $J$ (1.25 $\mu$m), $H$ (1.65 $\mu$m), and $K$ (2.2 $\mu$m) bands. Although foreground clouds in the Perseus and Local spiral arms lie around Cloud 2 on the sky, we could not detect any red sources apart from Cloud 2 within the total surveyed area of roughly 900 square arcminutes. Therefore, the detected red sources are very likely to be members of Cloud 2. Most of the sources show a large $H-K$ excess ($H-K > 0.8$), indicating their YSO nature. We have also obtained $K$-band (1.85–2.45 $\mu$m) spectra of two of the infrared sources, IRS 1 and IRS 2, which are near the two CO peaks in Cloud 2. Strong hydrogen emission lines (Br$\gamma$, Br$\delta$, and Pa$\alpha$) with a slight blueshift were detected for IRS 1, while no emission or absorption lines were detected for IRS 2 within the uncertainty. In view of the cloud association and the emission-line spectrum, we conclude that IRS 1 is a YSO physically associated with Cloud 2. IRS 1 is associated with an $IRAS$ point source with an extended feature (IRAS 02450+5816) near the northern CO peak of Cloud 2. This $IRAS$ source has a low color temperature (∼30 K) and is located between an H$\alpha$ peak and the CO peak, suggesting it is a photodissociation region. IRS 2 is associated with IRAS 02447+5811 on the southern CO peak of Cloud 2. IRS 3, 4, and 5 are located around this $IRAS$ source. The overall distribution of ionized gas, $IRAS$ sources, molecular cloud, and near-infrared sources suggests that MR-1, an early B-type star near Cloud 2, has triggered the formation of the near-infrared sources in Cloud 2. We are grateful to Mike Nassir, Jim Deane, and Richard Wainscoat, and to the staff of the University of Hawaii 2.2 m telescope, for help during our QUIST remote observing run. We thank the UKIRT support scientist John Davis and the UKIRT staff for their kind help during our CGS4 observing run. Special thanks go to Tom Kerr of UKIRT for providing special processing of the CGS4 data. Lastly, NK thanks Miwa Goto for her useful comments on the first manuscript. NK was supported by a JSPS overseas fellowship.
# Quenched QCD with domain-wall fermions on coarse lattices

hep-lat/9909049, UTCCP-P-71, UTHEP-409, September 1999. Talk presented by Y. Aoki at Lattice 99, Pisa, Italy.

## 1 INTRODUCTION

The domain-wall fermion formulation of QCD (DWQCD) is expected to realize exact chiral symmetry on the lattice, without species doubling, at finite lattice spacing. This represents an appealing possibility, particularly for investigations of problems such as weak matrix elements sensitive to chiral symmetry . Therefore, a number of studies have been made . Simulations in DWQCD, however, require considerable computing power. Even if the size of the extra dimension $N_s$ can be taken small, i.e. $N_s = O(10)$, lattice spacings much finer than $a^{-1} \simeq 2$ GeV will be difficult to simulate even in quenched QCD. Hence simulations on coarse lattices down to $a^{-1} \simeq 1$ GeV will be needed for reliable continuum extrapolations. A first step in DWQCD at such a strong coupling is to ensure the existence of chiral zero modes. Here we report our preliminary results on this problem. Our study is made for the plaquette action, and also for an RG-improved action to examine the effects of reduced discretization errors.

## 2 PARAMETERS

We employ the fermion action identical to that in Ref. , with the domain wall height $M$ and bare quark mass $m_f$. The gauge coupling is chosen to be $\beta = 5.65$ for the plaquette action and $\beta = 2.2$ for the RG action; these values correspond to the scale $a^{-1} \simeq 1$ GeV determined from the string tension. Simulations are made on a $12^3 \times 24 \times N_s$ lattice with $N_s = 10, 20, 30$ and 50. In the free fermion case the chiral zero mode exists over the range $0 < M < 2$. Since this range will be shifted in the interacting theory, we employ $M = 1.3, 1.7, 2.1$ and 2.5. For each of these values of $M$, we take $m_f = 0.1, 0.05$ and 0.03 and calculate the pion mass for both degenerate and non-degenerate quark and antiquark pairs. For each parameter point we have typically 20 configurations.

## 3 RESULTS

In Fig. 1 the pion mass squared $m_\pi^2$ is plotted as a function of the averaged bare quark mass $m_f^{av}$ at $M = 1.7$ in the case of the plaquette gauge action. Since the linearity of $m_\pi^2$ in $m_f^{av}$ is well satisfied, we adopt a linear chiral extrapolation in $m_f^{av}$. As can be seen in Fig. 1, we find a non-zero value for $m_\pi^2$ at $m_f^{av} = 0$ for each $N_s$. This non-zero value, however, is expected to vanish exponentially as $N_s \to \infty$. To see whether this occurs, we plot $m_\pi^2$ at $m_f^{av} = 0$ by thick square symbols in Fig. 2(a) as a function of $N_s$. Fitting these points by the form $\alpha e^{-\xi N_s}$ yields the dotted line with $\chi^2/dof = 21.4$. An alternative fit allowing a constant, $c + \alpha e^{-\xi N_s}$, gives the solid line with $\chi^2/dof = 1.7$ and $c = 0.0532(75)$. Clearly the latter fit better reproduces the behavior of our data. A similar phenomenon has been previously reported also at $\beta = 5.7$ . In order to confirm the existence of a non-zero $c$, we attempt to interchange the order of the limits $m_f^{av} \to 0$ and $N_s \to \infty$. As shown by the solid lines going through open circles in Fig. 2(a), we first make a fit of the form $m_\pi^2(m_f^{av}, N_s) = c'(m_f^{av}) + \alpha e^{-\xi N_s}$ to carry out the $N_s \to \infty$ extrapolation for each value of $m_f^{av}$. The results, shown by solid circles in Fig. 1, exhibit a linear behavior in $m_f^{av}$.
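The model comparison described above—a pure exponential against an exponential plus a constant, judged by $\chi^2/dof$—is a standard nonlinear least-squares exercise. A minimal sketch on placeholder data (not the actual measurements) follows:

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: m_pi^2 at m_f^av = 0 versus N_s (illustrative numbers only).
Ns   = np.array([10.0, 20.0, 30.0, 50.0])
mpi2 = np.array([0.210, 0.085, 0.062, 0.054])
err  = np.array([0.010, 0.008, 0.007, 0.006])

def decay(Ns, a, xi):          # pure exponential, c = 0
    return a * np.exp(-xi * Ns)

def decay_c(Ns, c, a, xi):     # exponential plus a constant
    return c + a * np.exp(-xi * Ns)

for f, p0 in [(decay, (1.0, 0.1)), (decay_c, (0.05, 1.0, 0.1))]:
    popt, _ = curve_fit(f, Ns, mpi2, p0=p0, sigma=err, absolute_sigma=True)
    chi2 = np.sum(((mpi2 - f(Ns, *popt)) / err) ** 2)
    print(f.__name__, popt, "chi2/dof =", chi2 / (len(Ns) - len(popt)))
```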
A linear chiral extrapolation $m_\pi^2(m_f^{av}, N_s = \infty) = c'(m_f^{av}) = d + \gamma m_f^{av}$ then yields $d = 0.0531(70)$, which agrees well with the value of $c$ previously obtained. The commutativity of the two limits is summarized in Fig. 2(a), where the two values obtained for $m_\pi^2(m_f^{av} = 0, N_s = \infty)$ are marked by “X” and “Y”. The above analyses strongly support the conclusion that $m_\pi^2$ at $m_f^{av} = 0$ does not vanish in the limit $N_s = \infty$ at $a^{-1} \simeq 1$ GeV for the 4-dimensional lattice size of $12^3 \times 24$. We find this conclusion to apply as well to the RG-improved gauge action, as shown in Fig. 2(b). In Fig. 3 the residual $m_\pi^2(m_f^{av} = 0, N_s = \infty)$ is plotted as a function of $M$ for the plaquette and RG-improved gauge actions. We observe that a non-vanishing residual remains under variation of $M$ for both actions. An improvement may be seen for the RG action, however, in that the magnitude of the residual is reduced by about a factor of two compared to that for the plaquette action. Finally, the decay rate $\xi$ extracted from the fits is shown in Fig. 4 as a function of $m_f^{av}$ at $M = 1.7$. For both actions, the chiral extrapolation of $\xi$ from $m_f^{av} \neq 0$ (open circles) smoothly agrees with the value directly obtained at $m_f^{av} = 0$ assuming $c \neq 0$ (filled circle), but disagrees with the value from the $c = 0$ fit. Furthermore, the value of $\xi$ varies little with $m_f^{av}$. This is consistent with the expectation that the decay rate is governed by the transfer matrix in the extra dimension, which is independent of $m_f$.

## 4 DISCUSSION

Our study of domain-wall QCD with the plaquette gauge action has shown that the pion mass in the chiral limit remains non-zero for an infinite extra dimension at a strong gauge coupling corresponding to $a^{-1} \simeq 1$ GeV on a $12^3 \times 24$ lattice. We have also found that this conclusion remains unchanged for the RG-improved gauge action. One possibility for the origin of a non-zero pion mass is finite spatial size effects, which is being checked by increasing the spatial size from 12 to 16. Another possibility is that it is an artifact of the linear chiral extrapolation, which does not take into account the possible presence of quenched chiral logarithms. To explore this possibility, the chiral breaking term in the axial Ward-Takahashi identity , which we expect to be free from the quenched singularities, is currently being investigated. Finally, it is possible that no chiral zero modes exist at $a^{-1} \simeq 1$ GeV, and therefore the domain-wall formalism fails to realize chiral symmetry in the region of strong coupling. A plausible explanation for this failure might be that the range of $M$ for zero modes to exist disappears at stronger couplings, where there is no gap in the eigenvalue spectrum of the 4-dimensional Wilson operator . This work is supported in part by the Grants-in-Aid of the Ministry of Education (Nos. 09304029, 10640246, 10640248, 11640250, 10740107, 11640294, 11740162). SE, TI, KN and YT are JSPS Research Fellows. AAK and TM are supported by the Research for the Future Program of JSPS.
# Order Parameter Suppression in Double Layer Quantum Hall Ferromagnets

## I Introduction

During recent years double quantum wells in the quantum Hall regime have been a subject of intensive study. These systems consist of two 2-dimensional electron layers in a perpendicular magnetic field, with a distance $d$ ($d \sim 100$ Å) between the layers comparable to the typical distance between electrons in the same layer. When the magnetic field is strong enough to accommodate all electrons in the lowest Landau level (LLL), interactions between the electrons largely determine the properties of the system. Even when the spin degree of freedom can be ignored because of complete spin alignment, the system exhibits a rich variety of phases associated with the layer index degree of freedom and dependent on the difference between the interlayer and the intralayer Coulomb interactions. These states are referred to as quantum Hall ferromagnets. In particular, even in the absence of a finite tunneling amplitude, there is *spontaneous* interlayer phase coherence, which lifts the degeneracy between single-particle symmetric states, which are occupied, and antisymmetric states, which are empty . In a mean-field theory this splitting blocks optical absorption in the lowest Landau level at T=0. Absorption is permitted only because of quantum fluctuations, making this probe particularly important. In this paper we present a theory of quantum fluctuations and optical absorption in double-layer quantum Hall ferromagnets.

## II Formalism

In the following, we discuss the nature of the many-body ground state wavefunction for such a system in a mean field approximation and systematically improve upon it by including the effect of quantum fluctuations. We present some numerical results and briefly discuss their experimental implications. Let us consider a system at filling factor $\nu = 1$, neglect the spin dynamics and use the lowest Landau level approximation. It is convenient to describe the layer index degree of freedom by a ‘pseudospin’, where the symmetric state corresponds to ‘pseudospin up’ ($|\uparrow\rangle$) and the antisymmetric state is ‘pseudospin down’ ($|\downarrow\rangle$). Then the interaction between the electrons is a sum of two potentials: a term $V_0 = (V_A + V_E)/2$, which conserves pseudospin, and a term $V_x = (V_A - V_E)/2$, which reverses the pseudospins of the interacting electrons. ($V_A$ and $V_E$ are the intralayer and interlayer Coulomb interactions, respectively.) We expect the mean-field ground state to be fully pseudospin polarized, with all electrons occupying the symmetric single-particle orbitals. Since the $V_x$ term flips pseudospins, however, it is clear that the *exact* ground state must have an indefinite pseudospin polarization. Hence *even at zero temperature*, there must be some mixing of reversed pseudospins in the true ground state. We calculate this mixing by considering the scattering of electrons off virtually excited collective excitations, pseudospin waves. The finite temperature expression for the symmetric-state self energy is given by

$$\Sigma_S(i\omega_n) = \frac{1}{\beta}\sum_{i\Omega}\sum_{a=S,AS} \mathcal{G}_a^{MF}(i\omega_n - i\Omega)\, M^{-1}_{Sa,aS}(i\Omega), \qquad (1)$$

where $\mathcal{G}_a^{MF}(i\omega_n) = (-i\omega_n + \xi_a)^{-1}$ is the mean-field Matsubara Green's function and $M^{-1}$ is the pseudospin-wave propagator matrix.
At zero temperature the symmetric-state self energy becomes

$$\Sigma_S(i\omega_n) = \frac{2\pi l^2}{A}\sum_{\vec q} \frac{(E_{sw}(\vec q) + \Delta)^2}{2 E_{sw}(\vec q)}\, \frac{\epsilon(\vec q) - E_{sw}(\vec q)}{i\omega_n - \xi_{AS} - E_{sw}(\vec q)}. \qquad (2)$$

Here $E_{sw}(\vec q)$ is the pseudospin-wave energy, $\xi_{AS} = -\xi_S > 0$ is the mean-field energy of the antisymmetric state and $\Delta = 2\xi_{AS}$ is the interaction-enhanced quasiparticle level splitting between the symmetric and the antisymmetric state energies. For models with delta-function electron-electron interactions, like the Hubbard model which is frequently used for theories of itinerant electron magnetism, self-energy expressions of this form are most efficiently evaluated by using the Hubbard-Stratonovich transformation after formally expressing the electron-electron interaction as an exchange interaction favoring parallel spin alignment. For double layer systems, however, this transformation is not possible, and both Hartree and exchange fluctuations are important in the collective excitation spectrum and the fluctuation physics. To make progress we have derived the above results using a modified version of the Hubbard-Stratonovich transformation designed to cope with this difficulty, which is almost always present in realistic models. In magnetic systems, spin wave energies usually increase monotonically with momentum, so that low-energy physics is long-wavelength physics, which in turn can be described by a continuum (effective) field theory. In contrast, for a double layer system the pseudospin collective mode energies usually have a minimum at $pl \sim 1$, where $l$ is the magnetic length, and so an effective field theory description is not useful. (See Fig. 1.) We also note that when the distance between the layers vanishes, the pseudospin-flipping interaction vanishes, and then the mean-field approximation for the ground state is exact. This is reflected in the vanishing of the zero temperature self energy expression (2) when $d = 0$. We analytically continue the above self-energy expression to real frequencies and solve the Dyson equation, $\omega - \xi_S = \Sigma_S(\omega + i\eta)$, numerically to obtain the spectral function for the symmetric state. In the absence of the fluctuation self-energy correction, the thermal Green's function is given by $\mathcal{G}_S(i\omega_n) = \mathcal{G}_S^{MF}(i\omega_n)$. This corresponds to a spectral function $A_S(\omega) = \delta(\omega - \xi_S)$; all the spectral weight is in the delta function at the negative energy (occupied) symmetric-state quasiparticle pole. When the self-energy correction is included, the spectral weight at the symmetric-state quasiparticle pole is reduced to $z_S = (1 - \partial\Sigma_S/\partial\omega|_{\omega^*})^{-1}$, where $\omega^*$ satisfies the Dyson equation. The remaining spectral weight is distributed in a continuum piece at positive (unoccupied) energies where the self-energy (2) has a branch cut, *i.e.* in the interval $\xi_{AS} + E_{sw}^{min} \le \omega \le \xi_{AS} + \Delta$. If excitonic interactions can be neglected, the lowest-Landau-level contribution to the optical absorption spectrum is given by the positive energy part of $A_S(\omega)$.
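The numerical procedure just described—analytic continuation, a root search for the Dyson equation below the branch cut, and the pole-weight formula for $z_S$—can be sketched with a toy model. Everything numerical below (dispersion, coupling weights, parameter values) is a placeholder, not the actual double-layer matrix elements:

```python
import numpy as np
from scipy.optimize import brentq

xi_S, eta = -0.5, 1e-4                      # toy parameters
q = np.linspace(0.01, 5.0, 4000)
dq = q[1] - q[0]
E_sw = 0.2 + 0.8 * q**2 / (1.0 + q**2)      # toy pseudospin-wave energy, min 0.2
w_q = 0.05 * q * np.exp(-q)                 # toy coupling weight (2D measure included)

def sigma(omega: float) -> complex:
    # discretized q-sum of weight / (omega - xi_AS - E_sw), with xi_AS = -xi_S
    return np.sum(w_q / (omega + 1j * eta + xi_S - E_sw)) * dq

# Quasiparticle pole: omega - xi_S = Re Sigma(omega), below the branch cut
# that starts at xi_AS + E_sw^min = 0.7 in these units.
omega_star = brentq(lambda w: w - xi_S - sigma(w).real, -3.0, 0.65)

dw = 1e-5  # pole weight z_S = (1 - dSigma/domega)^(-1), by finite difference
dSigma = (sigma(omega_star + dw).real - sigma(omega_star - dw).real) / (2 * dw)
print(omega_star, 1.0 / (1.0 - dSigma))
```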
One index of the amount of spectral weight at positive frequencies is the suppression of the ground-state pseudospin polarization,

$$m(T=0) = z_S = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, n_F(\omega) \left[A_S(\omega) - A_{AS}(\omega)\right], \qquad (3)$$

from its mean-field value.

## III Results

We have calculated the pseudospin polarization for various values of the two experimentally controllable parameters in these systems, namely the distance between the layers $d$ and the tunneling amplitude $\Delta_{SAS}$. (We have neglected the finite thickness of the electron layers to simplify the calculations.) A phase diagram showing curves of equal polarization is plotted in Fig. 2. The line corresponding to $z_S = 0$ is the same as the phase boundary between the QHE/NO-QHE regions . As we approach the phase boundary, the minimum of the pseudospin-wave energy, $E_{sw}^{min}$ (occurring at a finite wavevector), approaches zero, leading to the instability which destroys the pseudospin polarized state and also the quantum Hall effect. The polarization drops to zero rather sharply as a function of $d$ for a given $\Delta_{SAS}$ (see Fig. 3), since $E_{sw}^{min}$ vanishes rapidly with layer separation. The spectral function for $d = 1.4$ and $\Delta_{SAS} = 0.10$ is shown in Fig. 4. We see that in this case the pseudospin polarization is reduced by 10% from its maximum value of 1. The nonvanishing spectral function at positive energies reflects the possibility of adding a symmetric electron as an antisymmetric-state quasiparticle by destroying a pseudospin wave present in the ground state. We stress the important role of special properties of the lowest Landau level single-particle states, and the absence of band structure, in simplifying the calculation described here. The presence, at $\nu = 1$ and low temperatures, of positive energy symmetric-state spectral weight has been detected recently by Manfra and Goldberg in a sample which is close to the quantum Hall boundary. The above theoretical results on the one-electron Green's function of double-layer systems provide a starting point for interpreting optical absorption experiments like those reported in Ref. . If excitonic effects can be ignored, the optical absorption spectrum is proportional to the portion of the symmetric spectral weight at energies above the Fermi energy. For $\nu = 1$, the contribution from the lowest Landau level vanishes since the symmetric orbital is occupied. Existing experiments do show clear evidence of the absorption due to quantum fluctuations expected on the basis of these calculations. Extending the above numerical calculations to take into account the finite thickness of the layers is possible without much difficulty. It will be interesting to see if the predictions for positive energy spectral weight made by this theory remain in agreement with experiments as they are refined and as the theory is refined by accounting for excitonic effects.

## acknowledgement

This work was supported by NSF grant no. DMR9714055.
# The Inverse Amplitude Method and Heavy Baryon Chiral Perturbation Theory applied to pion-nucleon scattering

Talk given at the 8th International Conference on Hadron Spectroscopy, HADRON99, August 24-28, 1999, Beijing, China. Work partially supported by DGICYT under contract AEN97-1693.

## 1 Introduction: Heavy Baryon Chiral Perturbation Theory

Chiral symmetry is a relevant constraint on the interactions between pions and nucleons, as has been well known since current algebra appeared in the sixties. However, in order to go beyond the current algebra results or tree level calculations from simple models, one needs an effective theory with a systematic power counting. Heavy Baryon Chiral Perturbation Theory (HBChPT) is an effective theory of nucleons and mesons, built as an expansion in small momentum transfer (and meson masses) compared with the typical mass of the nucleons and the chiral symmetry breaking scale of pions interacting in loops, $\Lambda_\chi = 4\pi f_\pi \simeq 1.2$ GeV, where $f_\pi$ is the pion decay constant. When dealing with baryons in an effective Lagrangian context, the main difficulty is that the nucleon four-momentum is of the same order as the expansion scale, since its mass is $m_B \simeq 1$ GeV no matter how small the momentum transfer, and even in the chiral limit . HBChPT overcomes this problem by treating the baryon fields as static heavy fermions consistently with chiral symmetry, following the ideas of Heavy Quark Effective Theory . A slightly off-shell baryon momentum can be written as:

$$p^\mu = m_B v^\mu + k^\mu, \quad \text{with} \quad v \cdot k \ll m_B. \qquad (1)$$

Then the Lagrangian $\mathcal{L}_v$ is given in terms of velocity dependent baryon fields

$$B_v(x) = \frac{1 + \not{v}}{2}\, \exp\left(i m_B \not{v}\, v_\mu x^\mu\right) B(x) \qquad (2)$$

satisfying now a massless Dirac equation, $\not{\partial} B_v = 0$, and whose momentum is $k \ll m_B$. Lorentz invariance is ensured by integrating with a Lorentz invariant measure, i.e. $\mathcal{L} = \int \frac{d^3 v}{2 v^0}\, \mathcal{L}_v$. Once this is done, it is possible to find a systematic power counting in $k/\Lambda_\chi$, $k/m_B$, $M/\Lambda_\chi$ and $M/m_B$, where $M$ is the mass of the mesons. Generically we will denote $M$ and $k$ by $q$. In the minimal formulation, only the fields of the pseudoscalar meson octet and the baryon octet are used to build the effective Lagrangian. In other cases, the baryon decuplet is also considered as a fundamental field of the effective Lagrangian. With the vertices of the effective Lagrangian of a given order, it is possible to calculate Feynman diagrams containing loops. Each loop increases the order of the diagram, so that any divergence can be absorbed in the coefficients of higher order operators. It is therefore possible to obtain finite results order by order in HBChPT, but paying the price of more and more chiral parameters.
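The projector structure of the field redefinition in Eq. (2) is easy to verify explicitly. A sketch for the rest frame $v^\mu = (1,0,0,0)$ follows; the gamma-matrix representation (standard Dirac basis) is chosen here purely for illustration:

```python
import numpy as np

# Dirac gamma matrices in the Dirac basis
I2, Z2 = np.eye(2), np.zeros((2, 2))
g0 = np.block([[I2, Z2], [Z2, -I2]])
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

v = np.array([1.0, 0.0, 0.0, 0.0])          # rest-frame velocity, v^2 = 1
metric = np.diag([1.0, -1.0, -1.0, -1.0])
vslash = sum(metric[mu, mu] * v[mu] * gammas[mu] for mu in range(4))

P = 0.5 * (np.eye(4) + vslash)              # the (1 + vslash)/2 of Eq. (2)
print(np.allclose(P @ P, P))                # True: it is a projector
```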
## 2 $\pi$-N scattering in HBChPT

Despite its difficulty, there are some one-loop $O(q^3)$ HBChPT calculations in the literature . For $\pi$-N scattering only four $O(q^2)$ and five $O(q^3)$ chiral parameters are relevant. In Table I we list their estimated values, obtained from a fit to the nuclear $\sigma$-term, the Goldberger-Treiman discrepancy and ten extrapolated threshold parameters . With this knowledge it was possible to make six predictions of threshold parameters, which is “Neither too impressive nor discouraging” . Let us remark on the rather slow convergence of HBChPT, “since the contributions of the first three orders are frequently comparable”. Encouraged by these results and by the success of unitarization techniques in meson-meson scattering , our aim is to extend the applicability of HBChPT to higher energies by implementing unitarity. As we have seen, within the HBChPT formalism, and counting $q_{cm}$ and $M$ as $O(\epsilon)$, the $\pi$-N amplitudes are obtained as a series in the momentum transfer. Customarily $\pi$-N scattering is described in terms of partial wave amplitudes of definite isospin $I$ and angular momentum $J$, which are therefore obtained as $t \simeq t_1 + t_2 + t_3 + O(\epsilon^4)$, where the subscript stands for the power of $\epsilon$ that each contribution carries. However, an expansion will never satisfy exactly the $\pi$-N elastic unitarity condition

$$\text{Im}\, t = q_{cm} |t|^2, \qquad (3)$$

although HBChPT satisfies unitarity perturbatively. Indeed, we have checked that

$$\text{Im}\, t_3 = q_{cm} |t_1|^2. \qquad (4)$$

Unfortunately, unitarity is a very relevant feature of strong interactions, and it is fundamental in order to incorporate resonances and their associated poles, which cannot be accommodated in an energy expansion. Note that although there are other $O(q^3)$ calculations with an explicit $\Delta(1232)$ which reproduce phase shifts up to $E_{cm} \simeq 100$ MeV , our aim is unitarization without including explicit resonance fields.

## 3 The Inverse Amplitude Method and $\pi$-N scattering.

Dividing Eq. (3) by $|t|^2$, the elastic unitarity condition can be recast as

$$\text{Im}\,(1/t) = -q_{cm} \;\Longrightarrow\; t = \frac{1}{\text{Re}\,(1/t) - i q_{cm}}. \qquad (5)$$

Any amplitude in this form satisfies elastic unitarity exactly. Depending on how well we approximate the real part of the inverse amplitude, we have different unitarization methods. For instance, setting $\text{Re}\,(1/t) = |q_{cm}| \cot\delta = -\frac{1}{a} + \frac{r_0}{2} q_{cm}^2$ we have the effective range approximation. If we simply take $\text{Re}\, t \simeq t_1$ we obtain a Lippmann-Schwinger type equation. Finally, if we use the $O(q^3)$ HBChPT expansion we arrive at:

$$t \simeq \frac{t_1^2}{t_1 - t_2 + t_2^2/t_1 - \text{Re}\, t_3 - i q_{cm} t_1^2}, \qquad (6)$$

which is the $O(q^3)$ form of the Inverse Amplitude Method (IAM). Note that if we expand again in powers of $q$, we recover at low energies the HBChPT result. Unitarization methods are not foreign to effective theories. Incidentally, Eq. (6) is nothing but a Padé approximant of the $O(q^3)$ series, and it is well known that Padés, together with very simple phenomenological models, are enough to describe the main features of $\pi$-N scattering. Although a systematic application with an effective Lagrangian was demanded, it was never carried out (see and references therein). In recent years, with the advent of Chiral Perturbation Theory (ChPT), the IAM has been applied to meson-meson scattering with remarkable success . In particular, using the $O(p^4)$ ChPT Lagrangians, the IAM reproduces all the channels up to about 1.2 GeV, including the $\sigma$, $f_0$, $a_0$, $\rho$, $\kappa$, $K^{*}$ and octet $\varphi$ resonances . Very recently, the Lippmann-Schwinger type equation mentioned above has been applied to S-wave kaon-nucleon scattering with eight coupled channels, using the lowest order Lagrangian plus one parameter , reproducing all the low energy cross-sections as well as the $\Lambda(1405)$ resonance.
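That Eq. (6) is exactly unitary by construction can be checked in a couple of lines; the numerical values below are placeholders, not HBChPT amplitudes:

```python
# Check that the O(q^3) IAM amplitude of Eq. (6) satisfies Im(1/t) = -q_cm
# exactly, once the perturbative unitarity of Eq. (4) holds (t1, t2 real).
q_cm = 0.3                               # placeholder CM momentum
t1, t2 = 0.20, 0.05                      # placeholder real amplitudes
t3 = -0.01 + 1j * q_cm * t1**2           # Im t3 = q_cm |t1|^2 built in

t_iam = t1**2 / (t1 - t2 + t2**2 / t1 - t3.real - 1j * q_cm * t1**2)
print((1.0 / t_iam).imag, -q_cm)         # both equal -0.3
```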
## 4 Results and Summary

In Fig. 1 we show the preliminary results of an IAM fit to low energy $\pi$-N scattering phase shifts. It can be noticed that there is a general improvement, and we obtain a fairly good description up to at least the first inelastic threshold, in some channels even up to 320 MeV of CM energy. Note that the $\Delta(1232)$ resonance in the $P_{33}$ channel has been generated dynamically. Indeed, we have found its associated pole in the $2^{nd}$ Riemann sheet, at $\sqrt{s} = 1209 - i\,46$ MeV. The chiral parameters resulting from the IAM fit are given in Table I. Note how different they are from those obtained in . That can be due to several reasons: a) The slow convergence of the series. Indeed we have checked that contributions from different orders are comparable in almost every partial wave at the energies we use. The effect of higher order terms, which was less relevant at threshold, is absorbed in our case in the fit of the parameters. b) Even without unitarization, the values of the parameters have a strong dependence on the observables or the range of energies used to extract them . c) It could also suggest that the $\Delta$ should be included as an explicit degree of freedom in the Lagrangian. d) It has also been suggested in that the data of yield a too large $\sigma$ term when analyzed with HBChPT. Further work along these lines is still in progress, and a more detailed presentation with additional and more complete results will be presented elsewhere.
# Panel Review: Reconstruction Methods and the Value of $\beta$

## 1. Introduction

The original charge to our panel, which consisted of myself, Marc Davis, Avishai Dekel, and Ed Shaya, was to discuss “Reconstruction Methods.” To narrow the discussion, we decided in advance to concentrate on the problem of deriving $\beta$ from comparison of redshift survey and distance indicator data. This seemed appropriate given the panel membership: all of us have worked on this problem, and together we span the spectrum of results, from low values of $\beta$ suggestive of a low density ($\Omega_M \simeq 0.2$) universe to high values consistent with $\Omega_M = 1$. While it may appear that we have strayed from the theme of “reconstructions,” $\beta$-determination is, in fact, closely linked with how we reconstruct the underlying velocity and density fields from the observable data. I clarify this point in § 2 below. In § 3, I review recent results for $\beta$ from peculiar velocities and consider possible explanations for their wide variance. I conclude in § 4 with a brief look to the future of the subject.

## 2. Reconstructions and $\beta$

Our theoretical formulation of cosmic dynamics involves the mass overdensity field $\delta_\rho(\vec{r})$ and the peculiar velocity field $\vec{v}_p(\vec{r})$. Linear gravitational instability theory tells us that these fields obey the simple relation $\vec{\nabla} \cdot \vec{v}_p = -f(\Omega_M)\,\delta_\rho$, where $f(\Omega_M) \simeq \Omega_M^{0.6}$. Such continuous fields are abstract mathematical constructs; reality is more complex. Discrete entities—galaxies—sparsely populate the universe, and we measure their redshifts and estimate their distances. Reconstruction Methods are the numerical algorithms we employ to turn these real-world data into representations of the underlying fields $\delta_\rho$ and $\vec{v}_p$. Several obstacles stand between the measurements and successful reconstruction of the density and velocity fields. First, there is biasing: how is the galaxy overdensity $\delta_g$ related to the desired $\delta_\rho$? In the oft-used if oversimplistic linear biasing paradigm $\delta_g = b\,\delta_\rho$, and the linear velocity-density relation becomes $\vec{\nabla} \cdot \vec{v}_p = -\beta\,\delta_g$, where $\beta \equiv f(\Omega_M)/b$. To the extent that linear biasing and linear dynamics hold, only $\beta$, not $\Omega_M$, is measurable by this method.
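The degeneracy just described means that $\Omega_M$ follows from $\beta$ only once a bias $b$ is assumed. A sketch of the inversion (the sample values below are illustrative, not fitted):

```python
def omega_m(beta: float, b: float) -> float:
    """Invert beta = f(Omega_M)/b with f = Omega_M**0.6 (linear theory, linear bias)."""
    return (beta * b) ** (1.0 / 0.6)

# Illustrative values only: a v-v style beta_I with unbiased IRAS galaxies,
print(omega_m(0.5, 1.0))   # -> ~0.31
# and a d-d style beta_I close to unity,
print(omega_m(0.9, 1.0))   # -> ~0.84
```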
Incorporating “realistic” models of biasing—and it is by no means clear that such models exist yet—presents another challenge to reconstruction methods. Reconstruction of $\vec{v}_p$ is if anything an even more daunting task: we estimate only the radial component of this field, at discrete, irregularly distributed positions, with large errors that grow linearly with distance. We cannot invoke the theoretical velocity-density relation, and thus measure $\beta$, until we have reconstructed the underlying fields. If our reconstructions of $\delta_g(\vec{r})$ and/or $\vec{v}_p(\vec{r})$ are flawed, we'll get the wrong value of $\beta$. And given the difficulties such reconstructions face, errors in $\beta$ are, perhaps, inevitable at this early stage in our understanding. Figure 1 summarizes the relationship between reconstruction methods and $\beta$-determination. The box at the bottom concerns the nature of the comparison by which $\beta$ is obtained. If one reconstructs $\vec{v}_p$ from the peculiar velocity data and compares its divergence with $\delta_g$ from redshift survey data, one is doing a density-density (d-d) comparison. Alternatively, one can reconstruct $\vec{v}_p(\vec{r})$ from redshift survey data (for an assumed value of $\beta$), using the integral form of the linear velocity-density relation, and compare with observed radial peculiar velocity estimates. This is known as the velocity-velocity (v-v) comparison. The distinctions between the two approaches are subtle but important, as discussed below.

## 3. Discrepant $\beta$ Values, and Possible Explanations for Them

It was not until the early 1990s that measuring $\beta$ by comparing peculiar velocity and redshift survey data became a realistic goal. The advent of full-sky redshift surveys—notably those based on the IRAS point source catalog—and large, homogeneous sets of Tully-Fisher (TF) and related data were the key developments. The earliest attempt was that of Dekel et al. (1993), the so-called “POTIRAS” comparison. In this procedure the velocity field is reconstructed using the POTENT algorithm, and its divergence compared with the galaxy density field from IRAS (a d-d comparison). Dekel et al. (1993) found $\beta_I = 1.29$ (here and below the subscript $I$ indicates that the galaxy density field is obtained from IRAS; other redshift surveys yield different overdensities and thus different $\beta$'s), a value widely taken as support for the then-popular Einstein-de Sitter paradigm. The POTIRAS analysis has since been redone with much improved peculiar velocity data, with the result $\beta_I = 0.89 \pm 0.1$ (Sigad et al. 1998), still consistent with a critical density universe unless IRAS galaxies are strongly anti-biased. Between the first and second POTIRAS papers, a number of studies, based on the v-v comparison, arrived at markedly lower values of $\beta_I$. Shaya, Peebles, & Tully (1995) estimated $\Omega_M = 0.17 \pm 0.1$ ($\beta_I \simeq 0.35$ if IRAS galaxies are unbiased) using the Least Action Principle to predict peculiar velocities. Willick et al. (1997) and Willick & Strauss (1998) used the VELMOD method to find $\beta_I = 0.5 \pm 0.06$. The above results were obtained from TF data; an application of similar methods to more accurate (but far more sparse) supernova data yielded $\beta_I = 0.4 \pm 0.1$ (Riess et al. 1997). (This list is not exhaustive, and apologies are due to authors not cited; my aim is illustrative rather than comprehensive.) Thus, it appears that a somewhat bimodal distribution of $\beta$ values has emerged. The d-d, POTIRAS method produces $\beta_I$ close to unity; the v-v comparisons, several based on the same redshift and velocity samples as POTIRAS, yield $\beta_I \simeq 0.5$. Neither the d-d nor the v-v comparison is inherently more valid; both are firmly grounded in linear gravitational instability theory. What, then, could be the cause of the discrepancies? Neither the panelists nor the conference participants arrived at a satisfactory answer to this question. However, the following list of possible explanations proved to be fertile ground for discussion. The list is accompanied by my own (biased) commentary on the salience of each explanation.
* Malmquist bias. This famous statistical effect has long been invoked as a cause of discrepant conclusions in astronomy. In fact, it is now a non-issue. Methods are now available for rendering Malmquist and related biases inconsequential; see Strauss & Willick (1995) for a detailed discussion.
* Nonuniversality of the TF and other distance-indicator relations. Another non-issue. It is easy to make the general argument—nonuniversal distance indicators imply spurious peculiar velocities—but there is little or no evidence implicating such effects in the $\beta$ problem.
* Calibration errors in peculiar velocity datasets. Such errors may indeed be present; new results from the Shellflow project (see the paper by Courteau in these proceedings) suggest that the widely used Mark III catalog has across-the-sky calibration errors. However, these errors affect mainly bulk flow estimates, not the value of $\beta_I$ (see Willick & Strauss 1998 for a clear demonstration of this).
* Non-trivial biasing. It is now widely believed that biasing is not only nonlinear, but stochastic and nonlocal (see the panel review by Strauss in these proceedings). If so, it is perhaps unsurprising that different approaches produce different results, as they may in fact be measuring different things. This argument may well contain a kernel of truth, but I am not convinced it is the whole story. I was impressed by the work of Berlind, Narayanan, and Weinberg on this subject (see their contributions to the proceedings), which shows that for all but the most contrived models of biasing, the value of $\beta$ one obtains is relatively insensitive to methodology.
* The density-density versus the velocity-velocity comparison. This, I think, is the central issue. We may be asking too much of our distance indicator data when we use them to derive the full 3-D velocity field and its derivatives, as is required for the d-d comparison. In the v-v comparison, by contrast, the distance indicator data are used essentially in their raw form; only the redshift survey data, which are intrinsically more accurate, are subject to complex, model-dependent manipulation. From this perspective, I would argue that the low values, $\beta_I \simeq 0.4$–$0.5$, that have come out of the v-v analyses are more likely to be correct.

## 4. A Quick Look to the Future

If we learned anything from our panel discussion, it was the usual lesson: better data will help. Fortunately, some already exist, and more are on the way. The Surface Brightness Fluctuation (SBF) data of Tonry and collaborators (see Tonry's paper in these proceedings) are now available for comparison with the IRAS redshift data; preliminary results are reported by Blakeslee in these proceedings. SBF distances are considerably more accurate than TF distances and promise a much higher-resolution look at the velocity field. Adam Riess reported new, and also very accurate, SN Ia distances for nearby galaxies. The extant TF data are not likely to increase dramatically in the short term, but will be recalibrated by the Shellflow program, and perhaps the different TF data sets (Mark III, SFI, Tully's catalog) will be merged into a larger, homogeneous catalog. In the longer term (5–10 years), they will be supplanted by much larger and more uniform TF data sets that will emerge from the wide-field infrared surveys currently under way, 2MASS and DENIS (see the contributions by Huchra and Mamon in these proceedings). I draw yet another lesson from our panel discussion: we have not yet fully developed the analytical methods needed to deal with nonlinearities in the universe, both dynamical and biasing-related.
In this regard I would encourage theorists to address the following problem: given a particular biasing model, how do we predict the peculiar velocity field from redshift survey data, taking full account of nonlinear dynamics? When reliable methods are developed for doing this, and tested against N-body simulations, we will be able to fully exploit the improved peculiar velocity data sets of the future.

### Acknowledgments.

I thank my fellow panel members, Marc Davis, Avishai Dekel, and Ed Shaya, for stimulating discussions.

## References

Dekel, A., Bertschinger, E., Yahil, A., Strauss, M., Davis, M., & Huchra, J. 1993, ApJ, 412, 1

Riess, A.G., Davis, M., Baker, J., & Kirshner, R.P. 1997, ApJ, 488, L1

Shaya, E.J., Peebles, P.J.E., & Tully, R.B. 1995, ApJ, 454, 15

Sigad, Y., Eldar, A., Dekel, A., Strauss, M.A., & Yahil, A. 1998, ApJ, 495, 516

Strauss, M.A., & Willick, J.A. 1995, Phys. Rep., 261, 271

Willick, J.A., Strauss, M.A., Dekel, A., & Kolatt, T. 1997, ApJ, 486, 629

Willick, J.A., & Strauss, M.A. 1998, ApJ, 507, 64
# Spectral Line Shape of High-Frequency Local Vibrations in Adsorbed Molecular Lattices

I. V. Kuzmenko and V. M. Rozenbaum

Institute of Surface Chemistry, National Academy of Sciences of Ukraine, Prosp. Nauki 31, Kyiv-22, 252022 Ukraine

E-mail: tanya@ap3.bitp.kiev.ua

## Abstract

We consider high-frequency local vibrations anharmonically coupled with low-frequency modes in a planar lattice of adsorbed molecules. The effect of lateral intermolecular interactions on the spectral line shape for local vibrations is analyzed in the limit of high density of adsorbed molecules. It is shown that the spectral line positions and widths depend on the behaviour of the low-frequency distribution function for the system of adsorbed molecules. The results obtained allow the spectral line characteristics of the local vibrations of isotopically diluted <sup>13</sup>C<sup>16</sup>O<sub>2</sub> molecules in the <sup>12</sup>C<sup>16</sup>O<sub>2</sub> monolayer on the NaCl(100) surface to be described in agreement with the experimentally measured values.

It is well known that the spectral line broadening for high-frequency local vibrations is caused by anharmonic coupling between high- and low-frequency modes \[1–3\]. At a sufficiently high frequency of the local vibrations, the dephasing model proved fruitful for the description of vibrational spectra of adsorbed molecules \[4–6\], as it has afforded a number of exact solutions \[7–9\] and has also allowed consideration of degenerate low-frequency deformation vibrations and their intrinsic anharmonicity . If the amount of adsorbed molecules suffices to form a monolayer, the dephasing model should take into account the collectivization of vibrational molecular modes \[11–12\]. The dephasing of collectivized high-frequency vibrations of adsorbates by low-frequency resonance molecular modes not interacting between themselves was studied in Ref. . On the other hand, it is well known that the formation of lattices of inclined-oriented adsorbed molecules, e.g., CO and CO<sub>2</sub> systems on the NaCl(100) surface , causes essential coupling of the low-frequency deformation modes of the adsorbates \[12–14,17,18\]. In the framework of the conventional exchange dephasing model, the spectral line shift and width for a local vibration of a single adsorbed molecule are expressed in terms of the anharmonic coupling of this vibration with a translational or orientational molecular vibration of low frequency $\omega_\ell$. The latter acquires a resonance nature due to its interaction with the quasicontinuous spectrum of crystal lattice phonons . For a planar molecular lattice, only the collective low-frequency molecular modes with quasimomentum $k < \omega_\ell/c_T$ ($c_T$ is the transverse sound velocity) are damped via one-phonon emission . If the density of adsorbed molecules is sufficiently high, i.e., provided the inequality

$$\frac{\omega_\ell^2}{4\pi n_a c_T^2} \ll 1$$

is valid ($n_a$ is the number of adsorbed molecules per unit area), the energy of a low-frequency vibration of a molecule is dissipated through lateral intermolecular interactions, and interactions of the molecules with the substrate are less important. To put it differently, the low-frequency vibration of a molecule decays by emission of vibrational modes of neighbouring molecules. In this paper, we investigate the dephasing of local vibrations for a system of anharmonically coupled high- and low-frequency molecular vibrations.
It is shown that the experimentally observed spectra of the local vibrations in planar lattices of adsorbed molecules can be described when lateral interactions between the low-frequency vibrations of the adsorbed molecules are considered. If translational symmetry is characteristic of the system, the harmonic parts of its Hamiltonian can be introduced in a diagonal form with respect to the two-dimensional wave vector $\mathbf{k}$ belonging to the first Brillouin zone of the planar lattice . Then we represent the total Hamiltonian as a sum of harmonic and anharmonic contributions:

$$H = H_0 + H_A, \qquad (1)$$

$$H_0 = \hbar\Omega_h \sum_{\mathbf{k}} a_{\mathbf{k}}^+ a_{\mathbf{k}} + \sum_{\mathbf{k}} \hbar\omega_{\mathbf{k}}\, b_{\mathbf{k}}^+ b_{\mathbf{k}}, \qquad (2)$$

$$H_A = \frac{\hbar\gamma}{N} \sum_{\mathbf{k}\mathbf{k}'\mathbf{k}''} a_{\mathbf{k}}^+ a_{\mathbf{k}'}\, b_{\mathbf{k}''}^+ b_{\mathbf{k}-\mathbf{k}'+\mathbf{k}''}, \qquad (3)$$

where $a_{\mathbf{k}}$ and $a_{\mathbf{k}}^+$ ($b_{\mathbf{k}}$ and $b_{\mathbf{k}}^+$) are annihilation and creation operators for the molecular vibrations of the high frequency $\Omega_h$ (low frequency $\omega_{\mathbf{k}}$), $\gamma$ is the biquadratic anharmonicity coefficient, and $N$ is the number of molecular lattice sites in the main area. Other types of anharmonic coupling do not contribute to the spectral line broadening \[3–5\], with the exception of additionally renormalizing the biquadratic anharmonicity coefficient , so that the model in question is widely involved in the interpretation of experimental spectra by fitting the parameter values. In the low-temperature limit ($k_B T \ll \hbar\omega_{\mathbf{k}}$) the spectral line for high-frequency molecular vibrations is of Lorentz-like shape, with the shift of its maximum $\Delta\Omega$ and width $2\Gamma$ defined by the following relationships :

$$\left(\begin{array}{c} \Delta\Omega \\ 2\Gamma \end{array}\right) = \left(\begin{array}{c} \text{Re} \\ -2\,\text{Im} \end{array}\right) W, \qquad (4)$$

$$W = \frac{\gamma}{N} \sum_{\mathbf{k}} n(\omega_{\mathbf{k}}) \left[1 - \frac{\gamma}{N} \sum_{\mathbf{k}'} \frac{1}{\omega_{\mathbf{k}} - \omega_{\mathbf{k}'} + i0}\right]^{-1}, \qquad (5)$$

where $n(\omega) = \left[e^{\beta\hbar\omega} - 1\right]^{-1}$ is the Bose factor and $\beta = (k_B T)^{-1}$. The spectral characteristics described by expressions (4) and (5) are determined by a certain function of the lateral interaction parameters and of the anharmonicity coefficient $\gamma$. Assuming that the low-frequency molecular band is sufficiently narrow, that is, that the inequality $\beta\hbar\Delta\omega \ll 1$ is valid, and substituting $n(\omega_\ell)$ for $n(\omega_{\mathbf{k}})$ in expression (5), the equation considered takes the form:

$$W = \gamma\, n(\omega_\ell) \int_{-\infty}^{\infty} \frac{\varrho(\omega)\, d\omega}{1 - \gamma \int_{-\infty}^{\infty} \varrho(\omega')\, d\omega' \left[\omega - \omega' + i0\right]^{-1}}, \qquad (6)$$

where

$$\varrho(\omega) = \frac{1}{N} \sum_{\mathbf{k}} \delta(\omega - \omega_{\mathbf{k}}) \qquad (7)$$

is the low-frequency distribution function for the molecular lattice. To investigate the lateral interaction effect on the basic spectral characteristics of high-frequency vibrations, assume that the anharmonicity coefficient $\gamma$ is sufficiently small.
Expanding $W$ in powers of $\gamma$ and retaining the terms of orders $\gamma$ and $\gamma^2$, we derive the following expressions for the spectral line shift and width:

$$\Delta\Omega = \gamma\, n(\omega_\ell), \qquad (8)$$

$$2\Gamma = \frac{2\pi\gamma^2}{\eta_{\text{eff}}}\, n(\omega_\ell), \qquad (9)$$

where

$$\eta_{\text{eff}} = \left[\int_{-\infty}^{\infty} \varrho^2(\omega)\, d\omega\right]^{-1}. \qquad (10)$$

Equations (8) and (9) express the spectral line shift and width in terms of the anharmonic coefficient $\gamma$ and the parameter $\eta_{\text{eff}}$. The latter characterizes lateral interactions of the low-frequency molecular vibrations. Lateral interactions cause the molecular vibrations to collectivize, that is, a molecular vibration with the frequency $\omega_\ell$ transforms into a band of collective vibrations in the adsorbed molecular lattice with nonzero band width $\Delta\omega$. Then, assuming that the frequency $\omega_\ell$ is located at the centre of the vibrational band, the distribution function (7) can be represented as:

$$\varrho(\omega) = \begin{cases} \Delta\omega^{-1} f(z), & |\omega - \omega_\ell| < \Delta\omega/2 \\ 0, & |\omega - \omega_\ell| > \Delta\omega/2 \end{cases} \qquad (11)$$

($z = (\omega - \omega_\ell)/\Delta\omega$), with the dimensionless function $f(z)$ determined by the dispersion law for the low-frequency molecular vibrations. By virtue of the fact that the distribution function (7) is normalized to unity, the following equation is valid:

$$\int_{-1/2}^{1/2} f(z)\, dz = 1. \qquad (12)$$

For $\eta_{\text{eff}}$ (10) to be calculated, consideration must be given to specific dispersion laws for vibrations induced by anisotropic intermolecular interactions. However, the contribution of lateral interactions to spectral characteristics is frequently described by a single parameter, the band width $\Delta\omega$ of the collectivized vibrations of adsorbates, with the function $f(z)$ assumed to be equal to the step function $\theta(1 - 4z^2)$. Then Eq. (10) takes the form:

$$\eta_{\text{eff}}^{(0)} = \Delta\omega. \qquad (13)$$

Taking account of the peculiarities of the distribution function leads to a change in the parameter $\eta_{\text{eff}}$. Since the function $f(z)$ is normalized by Eq. (12), the expression (10) can be rewritten as:

$$\eta_{\text{eff}} = \eta_{\text{eff}}^{(0)} \left[1 + \int_{-1/2}^{1/2} \left(f(z) - 1\right)^2 dz\right]^{-1}. \qquad (14)$$

Eq. (14) evidently demonstrates the decrease in the parameter $\eta_{\text{eff}}$ if the distribution function differs from the step function. To estimate the parameter $\eta_{\text{eff}}$, let us assume a circular first Brillouin zone, $0 < k < k_{\text{max}}$ ($k_{\text{max}} = \sqrt{4\pi n_a}$), and approximate the dispersion law for the low-frequency vibrations by the following expression:

$$\omega_{\mathbf{k}} = \omega_\ell - \frac{\Delta\omega}{2}\left[1 - 2(1+a)\frac{k}{k_{\text{max}}} + 2a\left(\frac{k}{k_{\text{max}}}\right)^2\right], \qquad (15)$$

with an arbitrary parameter $a$ ($|a| < 1$). The relationship (15) involves a term linear in $k$, which is in agreement with the dispersion laws for two-dimensional lattice systems constituted by dipole moments .
Substituting the expression (15) into Eq. (7) and integrating the derived relation with respect to the wave vector $\mathbf{k}$, we obtain the dimensionless distribution function:

$$f(z) = \frac{1}{a}\left[\frac{1+a}{\left[(1+a)^2 - 2a(2z+1)\right]^{1/2}} - 1\right], \qquad |z| < \frac{1}{2}. \qquad (16)$$

At $a = -1$ the function (16) reduces to the step function $\theta(1 - 4z^2)$. Substitution of the expression (16) into Eq. (10) gives:

$$\eta_{\text{eff}} = \Delta\omega\, a^2 \left[\frac{(1+a)^2}{2a} \ln\frac{1+a}{1-a} - 1 - 2a\right]^{-1}. \qquad (17)$$

The parameter $\eta_{\text{eff}}$ varies from $\Delta\omega$ at $a = -1$ to zero at $a = 1$. At $a = 0.8$ (which is consistent with realistic dispersion laws):

$$\eta_{\text{eff}} = 0.34\, \Delta\omega. \qquad (18)$$

In what follows we consider relationship (6) (which is true for arbitrary values of $\gamma$ and $\eta_{\text{eff}}$), with the density of states of the low-frequency band approximated by a step function of width $\eta_{\text{eff}}$. Then Eq. (6) takes the form:

$$W = \gamma\, n(\omega_\ell) \int_{-1/2}^{1/2} dz \left[1 - \frac{\gamma}{\eta_{\text{eff}}} \ln\frac{1+2z}{1-2z} + \frac{i\pi\gamma}{\eta_{\text{eff}}}\right]^{-1}. \qquad (19)$$

The expression (19) allows the lateral interaction parameters to be determined by a comparison between observed and calculated spectral characteristics. An example of adsorbed lattices with strong lateral interactions is the system of isotopically diluted <sup>13</sup>C<sup>16</sup>O<sub>2</sub> molecules in the <sup>12</sup>C<sup>16</sup>O<sub>2</sub> monolayer on the NaCl(100) surface. The stretch vibration frequencies of <sup>13</sup>C<sup>16</sup>O<sub>2</sub> and <sup>12</sup>C<sup>16</sup>O<sub>2</sub> molecules differ by 60 cm<sup>-1</sup>, and their coupling can thus be neglected. In contrast, the low-frequency vibrations prove to be essentially coupled. As far as translational vibrations are concerned, the mass difference between carbon isotopes is slight compared to the total molecular mass, whereas for orientational vibrations, the mass of the central carbon atom has only a slight effect on the molecular moment of inertia. The temperature dependent shift and width of the spectral line of stretch vibrations with the frequency $\Omega_h = 2281.6$ cm<sup>-1</sup> at $T = 1$ K can be approximated in the range from $T = 1$ K to 80 K by the following expressions:

$$\Delta\Omega = 0.52\, n(\omega_\ell)\ \text{cm}^{-1}, \qquad (20)$$

$$2\Gamma = 0.61\, n(\omega_\ell)\ \text{cm}^{-1}, \qquad (21)$$

with $\omega_\ell = 41\ \text{cm}^{-1}$. To explain the observed spectral characteristics, we equate expressions (20) and (21) with the real and imaginary parts of $W$ (19) and obtain two equations for the two parameters $\gamma$ and $\eta_{\text{eff}}$. Solving this system of equations yields the following parameter values: $\gamma = 0.78\ \text{cm}^{-1}$ and $\eta_{\text{eff}} = 3.8\ \text{cm}^{-1}$. Using Eq. (18) we obtain the value of the effective band width of the low-frequency vibrations, $\Delta\omega = 11\ \text{cm}^{-1}$, which is in good agreement with the band width for collectivized orientational vibrations of CO<sub>2</sub> molecules with quadrupole-quadrupole intermolecular interactions.
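Both steps above—evaluating $\eta_{\text{eff}}$ for the model distribution (16) and inverting Eq. (19) for the anharmonicity and effective band-width parameters—reduce to a short numerical exercise. A sketch follows; the root-finding setup and initial guesses are ours, not taken from the paper:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# 1) eta_eff from Eqs. (10)-(16): reproduces Eq. (18) at a = 0.8.
def f(z, a):
    return (1/a) * ((1 + a) / np.sqrt((1 + a)**2 - 2*a*(2*z + 1)) - 1)

a = 0.8
inv_eta, _ = quad(lambda z: f(z, a)**2, -0.5, 0.5)
print("eta_eff / Delta omega =", 1.0 / inv_eta)          # ~0.34

# 2) gamma and eta_eff from Eq. (19) and the measured coefficients (20)-(21).
def W_over_n(gamma, eta):
    integrand = lambda z: 1.0 / (1.0 - (gamma/eta)*np.log((1 + 2*z)/(1 - 2*z))
                                 + 1j*np.pi*gamma/eta)
    re, _ = quad(lambda z: integrand(z).real, -0.5, 0.5)
    im, _ = quad(lambda z: integrand(z).imag, -0.5, 0.5)
    return gamma * (re + 1j*im)

def equations(p):
    W = W_over_n(*p)
    return [W.real - 0.52, -2.0 * W.imag - 0.61]   # Eq. (4): shift and width

print(fsolve(equations, [0.8, 4.0]))               # ~[0.78, 3.8] cm^-1
```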
We note in conclusion that attempts to describe the temperature dependences of the spectral line shape for the above-mentioned CO<sub>2</sub> molecular ensemble in the framework of the conventional exchange dephasing model for a single adsorbed molecule lead to overestimated spectral line width, whereas calculated line shift agrees well with experimental values. Taking into consideration the lateral interactions between the low–frequency vibrations of the adsorbates, the model advanced provides agreement between calculated and observed spectral line shift and width. This research was supported by the State Foundation of Fundamental Researches, administered by the State Committee on Science and Technology of Ukraine (Project 2.4/308). V.M.R. acknowledges also the support of the International Science-Education Program (Grant No QSU 082166).
# Inferring degree of nonextensivity for generalized entropies

## Abstract

The purpose of this note is to argue that the degree of nonextensivity as given by the Tsallis distribution obtained from the maximum entropy principle has a different origin than the nonextensivity inferred from the pseudo-additive property of the Tsallis entropy.

PACS: 05.20.-y, 05.30.-d

Keywords: Generalized Tsallis Statistics

Tsallis' nonextensive formalism of statistical mechanics has been designed to treat those systems which cannot be treated within the Boltzmann-Gibbs formalism owing to the presence of long-range interactions, spatio-temporal complexity, fractal dynamics and so on. However, compared to the large number of successful applications of this formalism, there are relatively fewer papers which clarify its theoretical foundations . In this regard, it is important to seek clear guidelines to propose any valid forms of generalized statistical mechanics . One such basis for generalizing the Shannon entropy to $q$-entropies (including the Tsallis entropy) seems to be provided by the connection of the Tsallis formalism with $q$-calculus or quantum groups. In this paper, we will utilize a general form of the Tsallis entropy motivated by this connection and argue that the degree of nonextensivity as manifested by the maximum entropy principle has a different origin than the nonextensivity that is apparent in the pseudo-additive property of the Tsallis entropy. It was proposed in that the Tsallis entropy, generally written as

$$S_q^T = \frac{1 - \sum_{i=1}^W (p_i)^q}{q-1}, \qquad (1)$$

may be given by

$$S_q^T = -\sum_{i=1}^W [a_i]\, p_i, \qquad (2)$$

where

$$[a_i] = \frac{q^{a_i} - 1}{q - 1}, \qquad (3)$$

and

$$a_i = \frac{q-1}{\ln q}\, \ln p_i. \qquad (4)$$

In other words, the generalized bit-number can be written as Jackson's $q$-number. It was also argued that Eq. (4) is a transformation which connects non-commutative differential calculus to $q$-calculus. Although this makes the Tsallis entropy related to $q$-calculus, there is no justification from $q$-theoretic arguments as to why the variable $a_i$ should depend on the same $q$ parameter as the $q$-number $[a_i]$. It may well be that the Tsallis entropy is a special case of an entropy function where the $q$ in Eq. (4) coincides with the parameter $q$ of Eq. (3). Motivated by this, we propose to work with the entropy $S_q = -\sum_{i=1}^W [a_i]\, p_i$, where from now on $a_i$ is given by

$$a_i = \frac{r-1}{\ln r}\, \ln p_i. \qquad (5)$$

Note that $r > 0$. To see the role of the parameter $r$, we maximize $S_q$ under the generalized energy constraint

$$\sum_i \epsilon_i\, (q^{a_i} p_i) = U_q, \qquad (6)$$

where $a_i$ is given by Eq. (5). Note that for $r = q$, the above constraint is equivalent to $\sum_i \epsilon_i\, p_i^q = U_q$, which has been used earlier for the case of the standard Tsallis entropy . Now we study the variation of the function

$$\Phi = S_q - \alpha \sum_i p_i - \beta \sum_i \epsilon_i\, (q^{a_i} p_i). \qquad (7)$$

The probability distribution obtained by the Lagrange multiplier method is given by

$$p_i = \frac{\{1 - (1-q)\beta\epsilon_i\}^{\frac{\ln r}{(1-r)\ln q}}}{Z_{q,r}}, \qquad (8)$$

where $Z_{q,r}$ is the partition function obtained from the normalization condition for $p_i$. Note that $q > 0$, which is also significant for the concavity of the entropy. From Eq. (8), we see that when $r = q$, we get the usual Tsallis distribution

$$p_i = \frac{\{1 - (1-q)\beta\epsilon_i\}^{1/(1-q)}}{Z_q}. \qquad (9)$$
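The identity between Eqs. (1) and (2)–(4), and its $r$-dependent generalization with Eq. (5), can be verified numerically. A minimal sketch, where the probability vector is an arbitrary example:

```python
import numpy as np

def tsallis(p: np.ndarray, q: float) -> float:
    """Standard Tsallis entropy, Eq. (1)."""
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def S_via_q_numbers(p: np.ndarray, q: float, r: float) -> float:
    """S_q = -sum_i [a_i] p_i, with a_i from Eq. (5); r = q recovers Eq. (1)."""
    a = (r - 1.0) / np.log(r) * np.log(p)
    jackson = (q ** a - 1.0) / (q - 1.0)    # Jackson q-number [a_i], Eq. (3)
    return -np.sum(jackson * p)

p = np.array([0.5, 0.3, 0.2])
print(tsallis(p, 2.0), S_via_q_numbers(p, 2.0, r=2.0))  # both 0.62
```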
(5)) of the definition of Tsallis entropy and not from the parameter of the $`q`$-number (Eq. (3)). In fact, it is equally legitimate to work with a more general form for $`a_i`$ given by $$a_i=\frac{r-1}{\mathrm{ln}q}\mathrm{ln}p_i.$$ (10) Then the exponent of the generalized distribution (Eq. (8)) is $`1/(1-r)`$. Secondly, the pseudo-additive property of the entropy $`S_q=\sum _i[a_i]p_i`$, where $`a_i`$ may depend either on $`r`$ or $`q`$, $$S_q(I+II)=S_q(I)+S_q(II)+(1-q)S_q(I)S_q(II),$$ (11) follows directly from the $`q`$-additivity of $`q`$-numbers . This leads us to state that the degree of nonextensivity follows from different premises in the case of the maximum entropy principle and the pseudo-additive property of Tsallis entropy, respectively.
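The pseudo-additivity (11) for the standard $`r=q`$ case is easy to check numerically. The following minimal sketch (an illustration with arbitrary subsystem sizes and an arbitrary choice $`q=1.3`$) evaluates the entropy of Eq. (1) for two independent subsystems and for their joint product distribution; both sides of Eq. (11) agree to machine precision.

```python
import numpy as np

q = 1.3
rng = np.random.default_rng(0)

def S(p):
    # Tsallis entropy, Eq. (1): S_q = (1 - sum_i p_i^q) / (q - 1)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# two independent subsystems I and II with random normalized distributions
u = rng.random(5); u /= u.sum()
v = rng.random(7); v /= v.sum()
joint = np.outer(u, v).ravel()   # p_ij = u_i v_j for independent systems

# pseudo-additivity, Eq. (11)
lhs = S(joint)
rhs = S(u) + S(v) + (1 - q) * S(u) * S(v)
print(lhs, rhs)   # equal up to rounding
```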
no-problem/9909/cond-mat9909010.html
ar5iv
text
# Time delay correlations and resonances in 1D disordered systems ## Abstract The frequency dependent time delay correlation function $`K(\mathrm{\Omega })`$ is studied analytically for a particle reflected from a finite one-dimensional disordered system. In the long sample limit $`K(\mathrm{\Omega })`$ can be used to extract the resonance width distribution $`\rho (\mathrm{\Gamma })`$. Both quantities are found to decay algebraically as $`\mathrm{\Gamma }^{-\nu }`$ and $`\mathrm{\Omega }^{-\nu }`$, $`\nu \approx 1.25`$, in a large range of arguments. Numerical calculations for the resonance width distribution in a 1D non-Hermitian tight-binding model agree reasonably with the analytical formulas. For a long time one-dimensional disordered conductors have set an example for understanding the localization phenomenon in real electronic systems. In this communication we consider the reflection of a spinless particle from a disordered region of length $`L`$ and address such quantities as the resonance width distribution and the time delay correlation function. The issue of time delays and resonances has attracted considerable attention in the domain of chaotic scattering and mesoscopics because of its relevance for the properties of small metallic granules and quantum dots . Essential progress in the understanding of the statistical fluctuations of these and related quantities turned out to be possible recently in the context of the random matrix description of the chaotic motion of quantum particles in the scattering region. Such a description assumes from the very beginning that the time of relaxation on the energy shell due to scattering on impurities (Thouless time) is negligible in comparison with the typical inverse spacing between neighboring energy levels (Heisenberg time). This assumption neglects the effects of Anderson localization completely. At the same time the localized states are expected to affect the form of the resonance width statistics and time delay correlations considerably. Difficulties in dealing with localization analytically in higher dimensions suggest one-dimensional samples with a white noise random potential as a natural object of study. Various aspects of particle scattering in such systems were addressed by different groups recently . Still, the statistical properties of resonances in such systems were not studied, to the best of our knowledge. It is well-known that the level spacing statistics becomes Poissonian with the onset of Anderson localization, when the system size $`L`$ far exceeds the localization length $`\xi `$ and eigenstates do not overlap any longer. Notwithstanding the fact that the argumentation is applied to a closed system, the same should be true for its open counterpart as well. To exploit this fact we define the two-dimensional spectral density $`\stackrel{~}{\rho }(Z)`$ associated with broadened levels (resonances) $`\stackrel{~}{\rho }(Z)=\sum _n\delta (X-\mathrm{Re}Z_n)\delta (Y+\mathrm{Im}Z_n)`$, where $`Z=X+iY`$ is the complex energy, $`Z_n=E_n-i\mathrm{\Gamma }_n/2`$ is the corresponding position of the $`n`$-th resonance whose width is $`\mathrm{\Gamma }_n`$. We further define the resonance correlation function as follows: $$\langle \stackrel{~}{\rho }(Z_1)\stackrel{~}{\rho }(Z_2)\rangle _c=\langle \stackrel{~}{\rho }(Z_1)\rangle \delta ^{(2)}(Z_1-Z_2)-𝒴_2(Z_1,Z_2)$$ (1) with the brackets standing for the averaging over disorder. The so-called cluster function $`𝒴_2(Z_1,Z_2)`$ reflects eigenvalue correlations and therefore should vanish in the thermodynamic limit $`L\to \infty `$.
This property provides us with a possibility to relate the averaged density of resonances in the complex plane $`\rho (Z)`$ to the time delay correlation function. We recall that the time delay at a fixed real energy $`E`$ can be written in terms of $`\stackrel{~}{\rho }(X,Y)`$ as (see e.g. ) $$\tau (E)=\int _0^{\infty }dX\int _0^{\infty }dY\frac{2Y\stackrel{~}{\rho }(X,Y)}{(E-X)^2+Y^2}$$ (2) As long as we restrict ourselves to the thermodynamic limit, the correlation of time delays at two different energies $`E+\mathrm{\Omega }`$ and $`E-\mathrm{\Omega }`$ can be expressed in terms of the correlation function Eq.(1) with the cluster function neglected. The formula obtained can be further simplified by performing the integration over $`X`$ and taking into account that semiclassically $`\mathrm{\Omega },Y\ll E`$. In addition we find it convenient to rescale the spectral density and introduce the dimensionless resonance width distribution function $`\rho (y)`$ by means of the relation $$\stackrel{~}{\rho }(X,Y)=2\pi \nu _\xi ^2(X)\rho (2\pi \nu _\xi (X)Y)$$ (3) where $`\nu _\xi (E)\equiv \xi (2\pi \sqrt{E})^{-1}`$ is the density of states corresponding to a system of size $`\xi `$. This done, the relation between the time delay correlation function and the width distribution takes the form: $$\begin{array}{ccc}\hfill K(\omega ,\frac{L}{\xi })& =& \mathrm{\Delta }_\xi ^2\langle \tau (E+\omega \mathrm{\Delta }_\xi )\tau (E-\omega \mathrm{\Delta }_\xi )\rangle _c\hfill \\ & =& \int _0^{\infty }dy\frac{y}{\omega ^2+y^2}\rho (y)\hfill \end{array}$$ (4) where $`\mathrm{\Delta }_\xi =(2\pi \nu _\xi )^{-1}`$, standing for the mean level spacing corresponding to a single localization volume, is a natural energy scale for measuring the resonance width as well as the time delay correlations. As we will see below, Eq.(4) gives us a possibility to restore the density $`\rho (y)`$ from the time delay correlation function. Before we proceed with our analysis it is instructive to develop a simple intuitive picture of the form of the resonance width distribution for a particle reflected from a one-dimensional disordered sample. We may say qualitatively that this distribution (normalized to the number of states N) is given by $`𝒫(y)`$ $`=`$ $`{\displaystyle \frac{N\xi }{L}}p(y)+{\displaystyle \frac{N}{L}}{\displaystyle \int _\xi ^L}dx\delta \left(y-e^{-x/\xi }\right)`$ (5) $`=`$ $`{\displaystyle \frac{N\xi }{L}}\left[p(y)+{\displaystyle \frac{1}{y}}\chi \left(e^{-L/\xi }\le y\le e^{-1}\right)\right]`$ (6) where $`y=\mathrm{\Gamma }/\langle \mathrm{\Gamma }\rangle `$ and $`\chi (y)`$ is the characteristic function of the indicated interval. The variable $`x`$ in Eq.(5) stands for the distance from the open edge, and we assume the opposite edge of the sample to be closed for the sake of simplicity. The function $`p(y)`$ is the resonance width distribution corresponding to resonances residing at a distance $`x\lesssim \xi `$ from the open edge. Other states which are localized in the bulk give rise to an exponentially small probability for the particle to escape from the sample, and the corresponding width is $`\mathrm{\Gamma }\propto \mathrm{exp}(-x/\xi )`$. This oversimplified picture can give us an idea of the $`y^{-1}`$ dependence of the resonance density in a rather wide parametric range. However, our explicit calculations presented below demonstrate a slightly different power law: $`y^{-1.25}`$. In the present communication we give only a sketch of the calculations, relegating details to a more extended publication.
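The relation (4) is easy to explore numerically. The minimal sketch below (an illustration with an arbitrary sample length) evaluates $`K(\omega )`$ by quadrature for the oversimplified density of Eq. (6), $`\rho (y)\propto 1/y`$ on $`(e^{-T},e^{-1})`$; in the window $`e^{-T}\ll \omega \ll 1`$ it reproduces the naive $`\omega ^{-1}`$ scaling implied by that picture, to be contrasted with the $`\omega ^{-1.25}`$ law found below.

```python
import numpy as np
from scipy.integrate import quad

T = 25.0                       # sample length in units of xi (arbitrary)
a, b = np.exp(-T), np.exp(-1)  # support of the 1/y part of Eq. (6)

def K(omega):
    # Eq. (4) with the unnormalized rho(y) = 1/y of Eq. (6):
    # integrand y/(w^2+y^2) * 1/y = 1/(w^2+y^2)
    val, _ = quad(lambda y: 1.0 / (omega**2 + y**2), a, b,
                  points=[omega], limit=200)
    return val

for w in (1e-8, 1e-6, 1e-4, 1e-2):
    print(w, K(w), np.pi / (2 * w))   # K ~ pi/(2w) for a << w << b
```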
To start with, we consider elastic scattering of a spinless particle $`\psi (x)`$ on a 1D disordered sample. The boundary condition $`\psi (0)=0`$ is imposed at the origin (left edge) and the particle is subject to a white-noise potential $`V(x)`$, $`\langle V(x)\rangle =0`$, inside the interval $`(0,L)`$. No potential barrier is assumed at the right edge, which corresponds to perfect coupling between the system and the continuum. The Schrödinger equation on the interval $`(0,L)`$ can be formulated in terms of the logarithmic derivative of the wave function $`z=d\mathrm{ln}\psi /dx`$ : $$\frac{dz}{dx}=V(x)-E-z^2$$ (7) where $`\langle V(x)V(y)\rangle =\sigma _g\delta (x-y)`$, $`E=k^2`$. In what follows we scale the variance in such a way that $`\sigma _g=4E/\xi `$, with $`\xi `$ being the localization length, and use the rescaled potential $`v(x)=V(x)/2k`$ with variance $`1/\xi `$. After the substitution $`z=k\mathrm{cot}\varphi `$ Eq.(7) is reduced to an equation for the phase $`\varphi `$. The phase shift between the in- and outgoing waves can be checked to be equal to $`2\varphi `$. Its derivative over the energy is nothing but the corresponding time delay. $$\tau =\frac{1}{k}\frac{d\varphi }{dk}$$ (8) To address time delay correlations at two different energies we have to take into account the set of the following four equations (cf. ): $$\begin{array}{ccc}\hfill \dot{\varphi }_1& =& k+q-2v(x)\mathrm{sin}^2\varphi _1\hfill \\ \hfill \dot{\varphi }_2& =& k-q-2v(x)\mathrm{sin}^2\varphi _2\hfill \\ \hfill \dot{\tau }_1& =& k^{-1}-2\tau _1v(x)\mathrm{sin}2\varphi _1\hfill \\ \hfill \dot{\tau }_2& =& k^{-1}-2\tau _2v(x)\mathrm{sin}2\varphi _2\hfill \end{array}$$ (9) where the dot stands for the derivative over $`x`$. The first two equations are those for the phases, each related to its own eigenstate $`E=(k+q)^2`$ and $`E=(k-q)^2`$. Two more equations are those for the time delays. They emerge after taking the derivative of the corresponding phases over the energy $`E=k^2`$ and exploiting the approximation $`q\ll k`$. Eqs. (9) contain a multiplicative random force $`v(x)`$ and must be understood in the Stratonovich sense . As usual, significant simplifications occur in the semiclassical limit when we restrict ourselves to large energies: $`k\gg q`$, $`kL\gg 1`$ . The last condition means that the joint probability density $`P(\varphi _1,\varphi _2,\tau _1,\tau _2)`$, satisfying a Fokker-Planck equation, oscillates rapidly with respect to the phase $`\varphi =\varphi _1+\varphi _2`$. We can average the probability density over this rapid phase in the leading approximation in $`1/kL`$. This averaging can be most efficiently done on the level of the Fokker-Planck equation and, as the result, we obtain a partial differential equation for the density $`P(\eta ,\tau _1,\tau _2)`$ where $`\eta =\varphi _1-\varphi _2`$ is a ”slow” variable. This done, it is more convenient to come back to the level of Langevin-type equations for only three variables $`\eta `$, $`\tau _1`$, and $`\tau _2`$. Such a procedure requires introducing two independent random forces $`f_1(x)`$, $`f_2(x)`$ rather than one.
The reduced set of Langevin-type equations is found to be $$\begin{array}{ccc}\hfill \dot{\eta }& =& 2q-\xi ^{-1}\mathrm{sin}\eta \mathrm{cos}\eta +\mathrm{sin}\eta f_1(x)\hfill \\ \hfill \dot{\tau }_1& =& \frac{1}{k}+\tau _1(\mathrm{cos}\eta f_1(x)-\mathrm{sin}\eta f_2(x)-\frac{1}{\xi }\mathrm{cos}^2\eta )\hfill \\ \hfill \dot{\tau }_2& =& \frac{1}{k}+\tau _2(\mathrm{cos}\eta f_1(x)+\mathrm{sin}\eta f_2(x)-\frac{1}{\xi }\mathrm{cos}^2\eta )\hfill \end{array}$$ (10) The first equation is of the most importance and governs the evolution of the slow phase $`\eta `$. It contains the only random force $`f_1`$: $`\langle f_1(x)f_1(y)\rangle =(2/\xi )\delta (x-y)`$. The other two equations depend on an additional random force $`f_2`$, which is independent of $`f_1`$ and has the same variance $`2/\xi `$. At the initial point $`x=0`$ the variables $`\eta `$, $`\tau _1`$, and $`\tau _2`$ have to be equal to zero. The value of the parameter $`q`$ influences the dynamics of the phase $`\eta `$: when $`q`$ is positive (negative) the function $`\eta (x)`$ is also positive (negative). The limit $`q=0`$ leads to the trivial stationary solution $`\eta (x)=0`$, and the two equations for $`\tau _1`$, $`\tau _2`$ become equivalent and can be used for calculating the time delay distribution . In the general case $`q\ne 0`$ the last two equations still can be solved: $$\tau _{1,2}(L)=\frac{1}{k}\int _{0}^{L}dxe^{\int _x^Ldx^{}(f_1\mathrm{cos}\eta \mp f_2\mathrm{sin}\eta -\frac{1}{\xi }\mathrm{cos}^2\eta )}$$ (11) To perform the disorder averaging we have to integrate over the random forces $`f_1`$, $`f_2`$ with the Gaussian measure. When $`q=0`$ the averaging of the product $`\tau _1\tau _2`$ is readily done and leads to the second moment of the time delay distribution: $$\langle \tau ^2\rangle =\frac{1}{2\mathrm{\Delta }_\xi ^2}\left(e^{2T}-2T-1\right);T=\frac{L}{\xi }$$ (12) which increases exponentially with the length of the sample. It is even simpler to check that $`\langle \tau \rangle =L/k=\mathrm{\Delta }_\xi ^{-1}T`$. When $`q\ne 0`$ the averaging of the product $`\tau _1\tau _2`$ over the random forces gives rise to the correlation function (4) where $`\omega =2q\xi `$. The integration over the noise $`f_2`$ is still Gaussian and can be easily done. To perform the other integration we restrict our consideration to the interval $`\eta \in (0,\pi )`$ because of the obvious periodicity of the solutions, and take advantage of the new variable $`g=\mathrm{ln}(\mathrm{tan}\eta /2)`$ to rewrite the first of Eqs.(10) in the following form: $$f_1=\dot{g}-V(g);V(g)=2q\mathrm{cosh}g+\xi ^{-1}\mathrm{tanh}g$$ (13) This gives a possibility to replace the integration over $`f_1`$ by that over $`g`$, taking into account the initial condition $`g(0)=-\infty `$ and the corresponding Jacobian: $$𝒥=\mathrm{exp}\left\{-\frac{1}{2}\int _0^L\frac{dV(g)}{dg}dx\right\}$$ (14) The next step is to make use of the formal analogy with the Feynman path integral of quantum mechanics.
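The moments quoted above can be checked by direct simulation of Eqs. (10) in the $`q=0`$ limit. The sketch below (our discretization choices; parameters arbitrary) first converts the Stratonovich equation to Itô form: at $`\eta =0`$ the Itô drift correction $`+\tau /\xi `$ cancels the $`-\tau \mathrm{cos}^2\eta /\xi `$ term, leaving $`d\tau =k^{-1}dx+\tau dW`$ with $`\langle dW^2\rangle =(2/\xi )dx`$, which reproduces $`\langle \tau \rangle =L/k`$ and Eq. (12) within Monte Carlo error.

```python
import numpy as np

k, xi, T = 1.0, 1.0, 1.0           # wave number, localization length, L/xi
nstep, nreal = 1000, 500_000
L = T * xi
dx = L / nstep
rng = np.random.default_rng(1)

# Euler scheme for the Ito form of the q=0 limit of Eqs. (10):
# d tau = dx/k + tau dW, with Gaussian dW of variance (2/xi) dx
tau = np.zeros(nreal)
for _ in range(nstep):
    dW = rng.normal(0.0, np.sqrt(2.0 * dx / xi), size=nreal)
    tau += dx / k + tau * dW

Delta_xi = k / xi                   # from <tau> = L/k = T/Delta_xi
print(np.mean(tau), L / k)                           # first moment
print(np.mean(tau**2),
      (np.exp(2*T) - 2*T - 1) / (2 * Delta_xi**2))   # Eq. (12)
```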
After straightforward calculations we come back to the variable $`\eta `$ and arrive at the time delay correlation function (4) in the following form: $$K(\omega ,T)=2\int _0^Tdt_1\int _0^{t_1}dt_2R(T,t_1,t_2;\omega )-T^2$$ (15) $$\begin{array}{c}R=\int _0^\pi d\eta \delta (\eta )e^{-t_2\widehat{H}_L}e^{-(t_1-t_2)\widehat{H}_C}e^{-(T-t_1)\widehat{H}_R}1\hfill \\ \widehat{H}_L=-\mathrm{sin}^2\eta \frac{d^2}{d\eta ^2}+\omega \frac{d}{d\eta },\hfill \\ \widehat{H}_C=-\frac{d}{d\eta }\mathrm{sin}^2\eta \frac{d}{d\eta }+\omega \frac{d}{d\eta },\hfill \\ \widehat{H}_R=-\frac{d^2}{d\eta ^2}\mathrm{sin}^2\eta +\omega \frac{d}{d\eta }\hfill \end{array}$$ (16) The action of the differential operators $`\mathrm{exp}(-t\widehat{H}_{L,C,R})`$ applied to the right-hand side is equivalent to finding the solution of the corresponding evolution equations. Unfortunately the eigenvalues and eigenfunctions of the operators $`\widehat{H}_{L,C,R}`$ are poorly understood, and an exact calculation of the quantity $`R`$ for arbitrary sample length $`L`$ and frequency $`\omega `$ is beyond our possibilities. Still, an approximate solution can be found. It is worth noticing that equations of this kind are common in the context of one-dimensional localization and have been discussed in the literature . A thorough analysis of Eqs.(15,16) shows that for exponentially small frequencies $`\omega \ll \mathrm{exp}(-T)`$ the main contribution to the integral over $`t_1`$, $`t_2`$ in Eq.(15) comes from the region $`t_1\approx t_2\approx T`$. In this situation the time delay correlation function is proportional to $`lim_{\eta \to 0}\mathrm{exp}(-T\widehat{H}_R)1`$. The latter expression is amenable to a perturbative analysis in $`\omega `$ which allows us to present the result in the form of an asymptotic series: $$K(\omega ,T)=\underset{k=0}{\overset{\infty }{\sum }}\gamma _k(i\omega )^{2k}e^{(2k+1)(2k+2)T}$$ (17) where $`\gamma _k`$ does not depend on $`T`$. With the help of the relation (4) we obtain that the resonance widths are distributed according to the log-normal law in the region of exponentially narrow resonances $`y\ll \mathrm{exp}(-T)`$: $$\rho (y)=N^{-1}y^{-3/2}\mathrm{exp}\left(-(4T)^{-1}\mathrm{ln}^2y\right)$$ (18) in agreement with natural expectations . Eqs.(15,16) can be analyzed also in the thermodynamic limit $`T\to \infty `$ or for large enough frequencies $`\mathrm{exp}(-T)\ll \omega `$. The main contribution to the integral Eq.(15) exactly cancels the term $`T^2`$ and the leading correction turns out to be independent of $`T`$. $$K(\omega )=\frac{\pi ^2}{2^8}\underset{-\infty }{\overset{\infty }{\int }}d\mu \frac{\mu ^2\mathrm{\Gamma }\left(\frac{3+i\mu }{2}\right)\left(|\omega |/8\right)^{\frac{i\mu -3}{2}}}{\mathrm{\Gamma }^2\left(1+\frac{i\mu }{2}\right)\mathrm{cosh}^2\frac{\pi \mu }{2}}$$ (19) The result of the numerical evaluation of this integral is shown in Fig.1. In the derivation of the expression above we consider the operators $`\widehat{H}_{R,C,L}`$ acting in the Fourier space $`\mathrm{exp}(2im\eta )`$. Drastic simplifications occur when we let the index $`m`$ be continuous, which is the correct approximation for $`|m|\gg 1`$. Such a limit should be appropriately matched with that for small $`m`$: $`|m\omega |\lesssim 1`$ . The continuous approximation is responsible for an unphysical tail at large frequencies. We expect, however, that Eq.(19) is valid for $`\omega \ll 1`$.
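The integral (19) is straightforward to evaluate numerically; a minimal sketch (our addition) using complex-valued Gamma functions is given below. The $`\mathrm{cosh}^2`$ factor cuts the integrand off at $`|\mu |`$ of order a few, so a modest integration range suffices, and only the real part of the integrand needs to be kept since $`\mu \to -\mu `$ conjugates it.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def K(omega, mu_max=30.0):
    # numerical evaluation of Eq. (19)
    def f(mu):
        z = (mu**2 * gamma((3 + 1j*mu) / 2)
             * (abs(omega) / 8) ** ((1j*mu - 3) / 2)
             / (gamma(1 + 1j*mu/2)**2 * np.cosh(np.pi*mu/2)**2))
        return z.real
    val, _ = quad(f, -mu_max, mu_max, limit=400)
    return np.pi**2 / 2**8 * val

# effective power law K ~ omega^{-nu} over many decades
for w in (1e-6, 1e-5, 1e-4, 1e-3, 1e-2):
    print(w, K(w))
```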
It is interesting to note that even though a simple one-pole approximation of the integral (19) gives rise to an $`\omega ^{-1}`$ dependence in the $`\omega \to 0`$ limit, such a dependence takes place only for extremely small frequencies: $`\omega \lesssim 10^{-8}`$. The best fit for the correlation function in a very broad frequency region is $`\omega ^{-1.25}`$. Let us derive now the resonance width distribution $`\rho (y)`$ which follows from Eqs.(4,19). Taking into account the identity $$\int _0^{\infty }dyy^{\alpha +1}(y^2+\omega ^2)^{-1}=\pi |\omega |^\alpha /(2\mathrm{sin}\pi \alpha /2)$$ (20) we restore the resonance density in the following form: $$\rho (y)=\frac{\pi }{2^7}\underset{-\infty }{\overset{\infty }{\int }}d\mu \frac{\mu ^2\mathrm{\Gamma }\left(\frac{3+i\mu }{2}\right)\mathrm{sin}\frac{\pi (i\mu -3)}{4}(y/8)^{\frac{i\mu -3}{2}}}{\mathrm{\Gamma }^2\left(1+\frac{i\mu }{2}\right)\mathrm{cosh}^2\frac{\pi \mu }{2}}$$ (21) This integral shows basically the same features as Eq.(19) (see Fig.2). The resonance widths turn out to be virtually cut off at $`\mathrm{\Gamma }\approx \mathrm{\Delta }_\xi /8`$, and the rest of the plot should not be taken seriously, being an artifact of the approximation used. To check our results we considered the simplest random matrix model which was expected to belong to the same universality class. Namely, we diagonalized numerically as many as $`10000`$ tridiagonal matrices of size $`300\times 300`$ with diagonal elements uniformly distributed in the interval $`(-1,1)`$. The off-diagonal elements are chosen to be equal to $`1`$. In the same manner as in we can argue that the effect of the open edge can be effectively simulated by adding the imaginary shift $`\gamma =i\pi `$ to the last diagonal element of the matrix. We picked up eigenvalues from the center of the spectrum and investigated the statistics of their imaginary parts. As shown in Fig.2 the numerical results agree reasonably with our analytical predictions. In conclusion, we have addressed analytically the time delay correlations and resonances for the problem of reflection from a 1D disordered sample. We find the resonance density $`\rho (\mathrm{\Gamma })`$ to reveal log-normal behaviour for exponentially small widths and an algebraic dependence close to $`\mathrm{\Gamma }^{-1.25}`$ in the wide parametric range $`\mathrm{exp}(-L/\xi )\ll \mathrm{\Gamma }/\mathrm{\Delta }_\xi \ll 1`$. The time delay correlations are found to demonstrate similar behavior as a function of frequency. We gratefully acknowledge very informative and stimulating discussions with A. Comtet and C. Texier. MT thanks A. Comtet for the kind hospitality extended to him in Orsay, Paris, where part of this work was done. The work was supported by INTAS Grant No. 97-1342 (MT, YF), SFB 237 ”Disorder and Large Fluctuations” (YF), Russian Fund for Statistical Physics, Grant VIII-2, and RFBR grant No 96-15-96775 (MT).
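A scaled-down version of this numerical experiment can be sketched as follows (our illustration; the ensemble size is reduced, and the sign of the imaginary shift is chosen here so that the eigenvalues sit in the lower half plane, consistent with $`Z_n=E_n-i\mathrm{\Gamma }_n/2`$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, nmat = 300, 500            # matrix size and (reduced) ensemble size
widths = []

for _ in range(nmat):
    # tridiagonal non-Hermitian tight-binding model: uniform diagonal
    # disorder on (-1, 1), unit hopping, open edge modeled by an
    # imaginary shift of the last diagonal element
    H = (np.diag(rng.uniform(-1.0, 1.0, N)).astype(complex)
         + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
    H[-1, -1] -= 1j * np.pi
    z = np.linalg.eigvals(H)
    center = z[np.abs(z.real) < 0.5]          # center of the spectrum
    widths.extend(2.0 * np.abs(center.imag))  # Gamma_n = 2|Im z_n|

# histogram of log10(Gamma) exposes the algebraic ~Gamma^{-1.25} region
hist, edges = np.histogram(np.log10(np.array(widths) + 1e-300), bins=60)
print(hist)
```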
no-problem/9909/cond-mat9909328.html
ar5iv
text
# Strain in heteroepitaxial growth ## I Introduction Heteroepitaxy is the growth by deposition of one material on another. Since the two materials are different, stress can be generated, and it leads to a number of interesting and important phenomena. In this paper we consider various effects due to this strain by using the embedded atom method (EAM) . We consider the case where the adatoms are “larger” than the substrate atoms. Then if the adlayers are *pseudomorphic* (follow the periodic order of the substrate without dislocations), they have to be compressed. The size difference is not necessarily the only source of stress: the compressive stress of a few monolayers of Ag on Pt(111) is five times larger than expected from the size difference, presumably due to charge transfer. Since the elastic energy of the stressed layer is proportional to its height, the excess elastic energy can, in the course of growth, overcome the barrier of creating a dislocation network for relaxation. Thus pseudomorphic growth cannot be stable at large thickness. However, dislocated growth is not the only possibility: instead of normal layer-by-layer growth by merging of two-dimensional islands, the adatoms can form three-dimensional islands. The energetic reason for this is that three-dimensional islands can relax: the lattice constant in the majority of an island can be close to its bulk value, and only the bottom of the island is stressed significantly. In the case of Volmer–Weber growth, the islands nucleate on the substrate, while in the Stranski–Krastanov case the first few layers grow epitaxially (*wetting layers*), and then islands form. Three-dimensional islands are important for practical applications, as they are a good candidate for lateral electron confinement. Certain semiconductor systems (e.g., InAs on GaAs) develop pseudomorphic three-dimensional structures with a relatively narrow size distribution for the islands . Since this ordering takes place spontaneously during epitaxial growth, the islands are called *self-organized quantum dots*. The uniformity of quantum dots and, in particular, the possible narrowing of the island size distribution due to strain, is an important technological issue, and considerable effort has gone into measuring and modeling this effect. A simplified atomic level simulation of strained epitaxial systems has been done by Orr et al. using the dynamic Monte Carlo method in one dimension. Surface particles were allowed to hop to neighboring sites. The hopping probability depended on both the bond and the local strain. The strain was found by local relaxation after each motion and global relaxation after a fixed number of timesteps. The elastic lattice was modeled with harmonic forces between nearest and next-nearest neighbors. Qualitative effects that are observed in experiment are found in this treatment, e.g. the formation of three-dimensional islands whose size distribution depends on growth rate. However, the treatment is very schematic, and contains free parameters. More realistic theories fall into two classes. Empirical approaches take as input certain presumed effects on growth in heteroepitaxy such as the tendency to detach from large two-dimensional islands due to the build-up of strain, the effects of strain on adatom diffusion, and the probability of conversion of a two-dimensional island to a three-dimensional one.
However, these parameters are based on a combination of fitting and classical elastic theory, with no way to estimate the size of various effects for a real material. Schroeder and Wolf studied the effect of strain on surface diffusion for a Lennard-Jones lattice. This treatment can potentially estimate all the relevant effects. However, the Lennard-Jones potential is quite different from that found in the substances of interest. They observed that the activation barrier is a linear function of strain over a wide range: compressive strain enhances diffusion, while tensile strain hinders it. The strain changed mostly the energy of the saddle point; the stable sites were not much affected. The strain field of a coherent two-dimensional island is not uniform (the edges are more relaxed than the center), and this is therefore reflected in the diffusion of adatoms on top of the island. In this paper we use more realistic potentials than Schroeder and Wolf , namely the EAM approximation. Unfortunately, this method is appropriate for metals, not semiconductors, which are the materials of most technological interest. However, these potentials are probably more reliable than the various phenomenological interactions which have been proposed for semiconductors, and serve to give perspective on the effects which can be important in growth. Also, this is a computationally intensive approach, and can only treat one island at a time. Thus, we can only calculate parameters which can eventually be inputs into an empirical theory. To this end we investigate the effect of strain on the diffusion barrier of adatoms, the detachment energy from a two-dimensional island, and the energy landscape for diffusion nearby. ## II Simulations In our simulations we use a substrate of slab geometry, periodic in the lateral directions, with an open surface at the top, bounded by a frozen lattice below. The atoms of the substrate and the adlayers or adatoms were allowed to relax according to the potential described below. We did not introduce dislocations in the substrate. The relaxation was achieved by using conjugate gradient methods. It is necessary to have the elastic part of the substrate as deep as it is wide, because the elastic effects penetrate roughly isotropically. If the lattice were shallower, the deformation field would be cut off and we would lose long-range effects. This restriction has severe consequences for the lattice sizes that are computationally tractable. For an interatomic potential, we used the embedded atom method (EAM), which is believed to give a good representation of transition metals. The form of the potential is $$E_{\mathrm{tot}}=\frac{1}{2}\sum _i\sum _{j(\ne i)}\varphi ^{(ij)}(R_{ij})+\sum _iF^{(i)}(\rho _i^{\mathrm{host}})$$ (1) The pair potential part, $`\varphi ^{(ij)}(R)`$, is attributed to electrostatic interactions, while the embedding function $`F^{(i)}(\rho ^{\mathrm{host}})`$ is interpreted as the interaction of the ion cores with the free electrons. The explicit forms of the functions used in our simulation are given in Ref. . This pseudopotential provides reasonable values for many bulk properties. Whether it is appropriate for surface simulations is not as clear, for various reasons. Nevertheless, the EAM is a more realistic approach than pair potentials such as Lennard-Jones, and computationally tractable for the necessary system sizes, as opposed to first principles calculations.
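A minimal sketch of how a total energy of the form (1) is assembled is given below (our illustration; the pair potential, density contribution and embedding function passed in are made-up placeholders, not the tabulated Ag/Ni EAM functions of the references):

```python
import numpy as np

def eam_energy(positions, phi, rho, F, cutoff=5.0):
    """Total EAM energy, Eq. (1): half the sum of pair terms plus the
    embedding energy of each atom in its host electron density."""
    n = len(positions)
    e_pair = 0.0
    host = np.zeros(n)              # rho_i^host accumulated from neighbors
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                e_pair += phi(r)    # each pair counted once = (1/2) sum
                host[i] += rho(r)
                host[j] += rho(r)
    return e_pair + sum(F(h) for h in host)

# toy example with placeholder functions
pos = np.array([[0.0, 0.0, 0.0], [2.5, 0.0, 0.0], [0.0, 2.5, 0.0]])
E = eam_energy(pos,
               phi=lambda r: np.exp(-2.0 * r) / r,
               rho=lambda r: np.exp(-r),
               F=lambda h: -np.sqrt(h))
print(E)
```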
We selected the Ag/Ni system based on its large misfit (16% compression of the Ag adlayer) from the elements available with the EAM potential. This way we could achieve significant stress in the islands on a substrate of relatively small, computationally tractable size: $`32^3`$ in the following calculations. ## III Results We measured the effect of strain on the diffusion barrier. The substrate lattice was compressed in the horizontal direction by a given factor, and was allowed to relax vertically. Then an adatom was placed on top, and the whole system was allowed to fully relax. Fig. 1 shows the energy of the system when a Ag adatom was placed on a stable (fcc), metastable (hcp) and bridge point of a stressed Ag(111) substrate. The diffusion barrier (the difference between the bridge and the stable/metastable energy) is also plotted. Near zero stress the barrier was close to a linear function of the lattice constant, with an increasing barrier for tensile strain. This is the expected behavior: under compressive strain the energy landscape becomes more uniform, while under tensile strain the adatom feels more strongly the separate attracting potentials of the surface atoms. For large tensile strain this trend breaks down: the surface becomes softer, bringing down bridge energies, resulting in a smaller diffusion barrier. However, this linearity held only over a rather restricted range of strains. Note that we do not reproduce the result of Ratsch *et al.* , who used the LDA and did find linearity of the diffusion barrier with strain, as suggested on phenomenological grounds by Dobbs *et al.* On the other hand, effective medium theory calculations agree with our results, up to a 10 meV systematic shift; see Fig. 1a. When the lattice is unstressed, the fcc adsorption sites are slightly lower in energy than the hcp sites. However, in our calculations this trend reverses for large tensile strain. Note that the major effect here is not on the bridge energies, but on the energies of the stable sites, contrary to the effect found by Schroeder and Wolf . We applied the same procedure to the Ag/Ni(111) heterodiffusion system; the barriers and energies are depicted in Fig. 2. While the behavior of the diffusion barrier is qualitatively the same as in the Ag self-diffusion case, the dependence of the energies on strain is different. Around zero stress, the stable sites are unaffected, and the bridge energy changes. From this we can draw the conclusion that whether the energy of the stable sites or of the bridge point changes under stress is system dependent; no general statements can be made. One of our goals is to study the elastic effects of an island on the energy landscape observed by the diffusing adatoms. To pursue this we deposited a large hetero-island and an adatom on the substrate, and computed the energy of the system for different positions of the adatom; the configuration is shown in Fig. 3. In Fig. 4 we plot the diffusion barriers of a Ag adatom on top of a Ni(111) substrate, as a function of the distance from a Ag island of radius of 4 atoms. There are two different barriers: one seen by an adatom diffusing away from the island, and a different one for approaching it. The oscillation is due to the nature of the lattice: on top of an fcc(111) lattice an adatom can be in the fcc site (stable) or the hcp site (metastable). The diffusion barrier is measured between the bridge point and the stable or metastable site.
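The relaxation step used throughout these measurements can be sketched as follows (our illustration, reusing the hypothetical eam_energy of the previous sketch; the actual calculations used the tabulated EAM functions and a dedicated conjugate gradient code):

```python
import numpy as np
from scipy.optimize import minimize

def relax(mobile0, frozen, phi, rho, F):
    """Conjugate-gradient relaxation of the mobile atoms (adatom plus
    upper substrate layers) above a frozen bottom lattice, as in Sec. II.
    Assumes eam_energy(positions, phi, rho, F) as defined above."""
    def energy(flat):
        mobile = flat.reshape(-1, 3)
        return eam_energy(np.vstack([frozen, mobile]), phi, rho, F)
    res = minimize(energy, mobile0.ravel(), method='CG')
    return res.x.reshape(-1, 3), res.fun

# diffusion barrier = E(adatom relaxed at the bridge point, with its
# lateral position constrained) - E(adatom relaxed at the fcc site)
```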
According to the results, near the island it is easier to diffuse away from a stable site, and easier to diffuse inward from a metastable site. The island does not have a strong attractive or repulsive long-range effect on the adatom. However, if the adatom is very close, it can only diffuse inwards: it is captured by the island. The small island of the previous result was pseudomorphic with the substrate. For larger islands this is not the case. Fig. 5 shows the diffusion barriers near an island of radius of 7 atoms, which is already not pseudomorphic, as can be seen in Fig. 3. The distortion of the energy landscape is much larger in this case, and the attraction of the island can be felt at larger distances. The effect of the island is not only attraction (the outward barriers are larger than the inward ones) but also enhanced diffusion near the island: the diffusion barriers in both directions are decreased. Probably this is due to the fact that the substrate near the compressed island is also compressed. To check how much of this effect is due to the presence of the compressed hetero-island, we repeated the previous calculation with a homoepitaxial island: the large Ag island was replaced with a Ni island of the same size. The obtained barriers (Fig. 6) show an even smaller effect than in the case of the small hetero-island. Over a considerable range the energy landscape is deformed: the outward and inward directions are not equivalent (as in a sawtooth potential), but there is no global attraction or repulsion. We measured the detachment barrier from a strained island. Fig. 7 shows the binding energy as a function of island size; it is the same as the detachment barrier up to a sign. The trend is a decreasing barrier for large enough islands in all cases, as expected. For large islands the detachment barrier of an extra atom at the middle of the hexagonal island’s edge (see Fig. 8b) is smaller than the detachment barrier of the corner atom, or of the next atom after the corner. This is plausible, as the extra atom at the middle of the edge is less coordinated than the compared atoms. It has to be noted that the binding energy of the island of radius of 5 is very different compared to the nearby sizes. The explanation is the following. The binding energy is defined as the energy of the island with an adjacent adatom, the zero point being when the adatom is infinitely far away. The island of this size is at the borderline of pseudomorphic and non-pseudomorphic islands. When we measured the energy of the island by itself, the relaxation converged to a pseudomorphic state, see Fig. 8a. But when the adatom was added, this was enough of a perturbation that the system converged to a non-pseudomorphic state (Fig. 8b). Thus the addition of the adatom triggered a much lower energy state, hence the large negative binding energy. It is possible that the bare island also has a lower energy non-pseudomorphic state, but we did not do a detailed search. We also tried to obtain an energy landscape on top of an island. This was quite difficult, because the island atoms are very soft, deform very much in the presence of an adatom on top, and there is no well defined stable, metastable and bridge site. Fig. 9 depicts a case when the adatom is in a deformed four-fold hollow site. ## IV Summary In this paper we studied the elastic effects of heteroepitaxial islands on diffusion using atomistic simulations with the EAM potential.
Compressive strain enhances diffusion, small tensile strain hinders it, but large tensile strain also tends to enhance it. Whether it is the energy of the stable site or the bridge energy that changes depends on the system. The energy landscape near a compressed island is deformed: the island attracts the adatom, and diffusion is enhanced near the island. Even a homoepitaxial island deforms the energy landscape, but the change is much smaller, and only the symmetry of the potential is broken. The detachment barrier from a compressed island decreases with larger island size. The diffusion barriers on top of an island are hard to measure, because the island is soft and distorted near an adatom; there is no well defined diffusion path. This is probably due to the fact that we chose to work with a system that dislocates easily. Our general conclusion from this detailed microscopic study is, in some sense, negative. Empirical theories depend on making general statements about the effects of strain which can be modeled with a few parameters. Our study shows that while the qualitative ideas behind these theories are correct – a large two-dimensional island is destabilized by strain, for example – the form of the effect is quite complicated. Also, the representation of the diffusion barriers as linear in the strain is true only over a limited range in our calculations, and in the EMT , while the LDA does give linearity. The complexity of our results may be due to the small sizes of the two-dimensional islands that we were able to deal with, and to the fact that our metallic systems dislocate. Still, we think that these results should serve as a warning against a naive application of continuum elasticity theory in this area. We should note that in Refs. unreasonable assumptions about the size of elastic effects were found to be necessary: elastic couplings would have to be much larger than any effect that we calculate here in order to give significant narrowing. In our opinion the physical reason for the narrow size distribution of quantum dots is still obscure. We are grateful to Brad Orr for useful discussions. This work is supported by DOE grant DEFG-02-95ER-45546.
no-problem/9909/hep-th9909202.html
ar5iv
text
## 1 Introduction Quantum field theories on non-commutative geometries have received renewed attention recently following the observation that they arise naturally as a decoupled limit of open string dynamics on D-branes . In the formalism of , supersymmetric Yang-Mills theory on non-commutative geometry (NCSYM) arises from Fourier transforming the winding modes of D-branes living in a transverse torus in the presence of a NSNS 2-form background . To be concrete, consider a D-string oriented along the 01-plane and localized on a square torus in the 23-plane in the background of $`B_{23}`$. In the absence of $`B_{23}`$, the Fourier transform is equivalent to acting by T-duality in the 23-directions. In the presence of the $`B_{23}`$, however, the Fourier transform (I) and T-duality (II) act differently. On one hand, (I) gives rise to the NCSYM with non-commutativity scale $$[x_\mu ,x_\nu ]=i\theta _{\mu \nu }.$$ (1.1) On the other hand, (II) gives rise to D3-branes in the NSNS 2-form background. The precise map of degrees of freedom between (I) and (II) is highly non-local and was described in a recent paper as a perturbative series in the non-commutativity parameter $`\theta `$. The physics of (I) at large ’t Hooft coupling can further be related to (II) in the near horizon region in the spirit of the AdS/CFT correspondence . Yet, these equivalences have contributed very little to the understanding of the localized observables in the NCSYM. The difficulty stems largely from the fact that we do not yet understand the encoding of the observables in one formulation in terms of the other in sufficient detail. To study the localized structures, it is natural to introduce localized probes. A topologically stable solution such as a magnetic monopole seems particularly suited for such a task. Instantons on non-commutative space-times have also been studied along this line. In this article, we will study the static properties of magnetic monopoles, dyons, and other related structures in the NCSYM with $`𝒩=4`$ supersymmetry. (Related 1/2-BPS and 1/4-BPS constant field-strength solutions on tori were discussed in .) Since the non-commutativity modifies the equation of motion for the gauge fields, one must first establish that these solutions exist at all. To this end, the equivalence between (I) and (II) will prove to be extremely useful; magnetic monopoles and dyons can be understood in (II) in the language of brane configurations. Masses, charges, and supersymmetries of these objects can be analyzed in the language of (II). The fact that these objects stay in the spectrum of the theory in the decoupling limit provides strong evidence that objects with the corresponding mass, charge, and supersymmetry exist in the NCSYM. In the language of (II), it is also straightforward to argue for the existence and stability of exotic dyons which arise from three-string junctions and other complicated brane configurations. This paper is organized as follows. We will begin in section 2 by briefly reviewing some basic facts about the NCSYM (I) and how it arises as a decoupling limit. Then we will take the magnetic monopole as a concrete example and study its static properties in the language of (II) in section 3. In section 4, we will describe how the analysis of section 3 can be generalized to $`(p,q)`$-dyons and string junctions. We will conclude in section 5.
## 2 Non-commutative Yang-Mills from String Theory In this section, we will review the string theory origin of the NCSYM. To be specific, let us take our space-time to have 3+1 dimensions. We will not consider the effect of making time non-commutative. Then, without loss of generality, we can restrict our attention to the case where the only non-vanishing components of the non-commutativity parameter are $`\theta _{23}=-\theta _{32}=2\pi \mathrm{\Delta }^2`$. ($`\mathrm{\Delta }`$ has the dimension of length.) The NCSYM with coupling $`\widehat{g}_{\mathrm{YM}}`$ and non-commutativity $`\theta _{\mu \nu }`$ is defined by the action $$S=\mathrm{Tr}\int d^4x\left(-\frac{1}{4\widehat{g}_{\mathrm{YM}}^2}\widehat{F}_{\mu \nu }\star \widehat{F}^{\mu \nu }+\mathrm{\cdots }\right)$$ (2.1) where “$`\mathrm{\cdots }`$” corresponds to the scalar and the fermion terms, $`\widehat{F}`$ is the covariant field strength $$\widehat{F}_{\mu \nu }=\partial _\mu \widehat{A}_\nu -\partial _\nu \widehat{A}_\mu +\widehat{A}_\mu \star \widehat{A}_\nu -\widehat{A}_\nu \star \widehat{A}_\mu ,$$ (2.2) and the $`\star `$-product is defined by $$f(x)\star g(x)=e^{i\frac{\theta _{\mu \nu }}{2}\frac{\partial }{\partial x_\mu }\frac{\partial }{\partial x_\nu ^{}}}f(x)g(x^{})|_{x=x^{}}.$$ (2.3) Relevant details about non-commutative geometry and the NCSYM are reviewed in . According to the construction of , this theory is equivalent to D3-branes in a background NSNS 2-form in the $`\alpha ^{\prime }\to 0`$ limit while scaling $$g_\mathrm{s}=\frac{1}{2\pi }\widehat{g}_{\mathrm{YM}}^2\sqrt{\frac{\alpha ^{\prime 2}}{\alpha ^{\prime 2}+\mathrm{\Delta }^4}},V_{23}=\mathrm{\Sigma }_B^2=\frac{\alpha ^{\prime 2}}{\alpha ^{\prime 2}+\mathrm{\Delta }^4}\mathrm{\Sigma }^2,B_{23}=\frac{\mathrm{\Delta }^2}{\alpha ^{\prime }},$$ (2.4) and keeping $`\mathrm{\Delta }`$, $`\mathrm{\Sigma }`$ and $`\widehat{g}_{\mathrm{YM}}`$ fixed. In the presence of D-branes, a longitudinally polarized constant NSNS 2-form is not a pure gauge and has the effect of inducing a magnetic flux on the world volume. The magnetic fluxes in this context can be interpreted as a non-threshold bound state of D-strings oriented along the 1-direction. When multiple parallel D3-branes are present, the same number of D-strings gets induced on each of the D3-branes. When the 23-directions are compactified on a torus of size $`\mathrm{\Sigma }_B=\alpha ^{\prime }\mathrm{\Sigma }/\mathrm{\Delta }^2`$, the ratio of the number of induced D-strings and the number of D3-branes is precisely $`n_1/n_3=\mathrm{\Sigma }^2/\mathrm{\Delta }^2`$. The map between the gauge fields $`\widehat{A}_\mu `$ of the NCSYM (I) and the gauge fields $`A_\mu `$ living on the D1-D3 bound state (II) was constructed in to leading non-trivial order in $`\theta `$, and takes the form $$\widehat{A}_i=A_i-\frac{1}{4}\theta ^{kl}\{A_k,\partial _lA_i+F_{li}\}+𝒪(\theta ^2)$$ (2.5) The resummation of this series is not well understood at the present time. (The higher order corrections to (2.5) were studied recently in .)
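As a quick check of the $`\star `$-product (2.3) and the commutator (1.1), the following sketch (our addition) implements the bidifferential expansion for the single non-vanishing component $`\theta =\theta _{23}`$ and confirms $`x_2\star x_3-x_3\star x_2=i\theta `$; for polynomial arguments the series terminates, so a low truncation order is exact.

```python
import sympy as sp

x2, x3, y2, y3, theta = sp.symbols('x2 x3 y2 y3 theta')

def star(f, g, order=2):
    # Moyal product (2.3): exp((i*theta/2)(d_x2 d_y3 - d_x3 d_y2)) f(x)g(y)|_{y=x}
    g = g.subs({x2: y2, x3: y3}, simultaneous=True)
    total = sp.S(0)
    term = f * g
    for n in range(order + 1):
        total += term
        # one more power of the bidifferential operator, with the 1/n! factor
        term = (sp.I * theta / 2) / (n + 1) * (
            sp.diff(term, x2, 1, y3, 1) - sp.diff(term, x3, 1, y2, 1))
    return total.subs({y2: x2, y3: x3}, simultaneous=True)

comm = sp.expand(star(x2, x3) - star(x3, x2))
print(comm)   # -> I*theta, i.e. Eq. (1.1)
```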
Therefore, NCSYM can be Higgsed just like the ordinary SYM. This is important since BPS monopoles exist as a stable state in the Higgsed SYM. Second, if we assume that only the magnetic field and one component of the scalar (say $`\widehat{\mathrm{\Phi }}_9`$) is non-zero, the terms in the action can be assembled into the form $$S=\frac{1}{4\widehat{g}_{\mathrm{YM}}^2}\mathrm{Tr}𝑑x^4\left[ϵ^{ijk}\left(\widehat{F}_{ij}D_k\widehat{\mathrm{\Phi }}+D_k\widehat{\mathrm{\Phi }}\widehat{F}_{ij}\right)+(\widehat{F}_{ij}ϵ_{ij}^{}{}_{}{}^{k}D_k\widehat{\mathrm{\Phi }})(\widehat{F}\text{}^{ij}ϵ^{ijk}D_k\widehat{\mathrm{\Phi }})\right].$$ (3.1) The second term in the integral is positive definite, so the action is bounded below by $$S\frac{1}{4\widehat{g}_{\mathrm{YM}}^2}\mathrm{Tr}𝑑x^4ϵ^{ijk}\left(\widehat{F}_{ij}D_k\widehat{\mathrm{\Phi }}+D_k\widehat{\mathrm{\Phi }}\widehat{F}_{ij}\right)=\frac{1}{2\widehat{g}_{\mathrm{YM}}^2}\mathrm{Tr}𝑑x^4_kϵ^{ijk}\left(\widehat{F}_{ij}\widehat{\mathrm{\Phi }}\right).$$ (3.2) Thus the notion of the BPS bound exists also in the non-commutative theory. Now, by definition, a magnetic monopole solution should have the property that $$\widehat{\mathrm{\Phi }}\frac{U}{2}\sigma ^3$$ (3.3) at large $`r`$, so the bound on the action can be made to take the form $$S=\frac{U}{4\widehat{g}_{\mathrm{YM}}^2}\mathrm{Tr}_{S_2}𝑑S_kϵ^{ijk}\widehat{F}_{ij}\sigma ^3.$$ (3.4) Furthermore, in order for the action to be finite, $`F_{ij}`$ should decay according to $$\widehat{B}^k=\frac{1}{2}ϵ^{ijk}\widehat{F}_{ij}=\frac{x^k\sigma ^3}{2r^3}Q$$ (3.5) at sufficiently large $`r`$ where the system looks spherically symmetric. Therefore, (3.4) is evaluated as $$S=\frac{2\pi Q}{\widehat{g}_{\mathrm{YM}}^2}U.$$ (3.6) In commutative theories, $`Q`$ takes on integer values due to the Dirac’s quantization condition. It is an important question whether there are corrections to $`Q`$ in powers of $`(\mathrm{\Delta }U)`$ for the non-commutative theory. Even in the non-commutative theory, however, the fields are slowly varying for large enough $`r`$, so we expect the standard commutative gauge invariance argument to hold. Therefore, we are lead to conclude that the magnetic monopoles of NCSYM have the same masses and charges as their commutative counterparts. Here we have argued in general terms that a self-dual magnetic monopole solution will saturate the BPS bound and has the same mass and the charge as in the commutative theory, provided that they exist. Unfortunately, the field equations of the non-commutative theory contain an infinite series of higher derivative interactions, making the task of proving the existence, as well as studying the detailed structure of these solutions, a serious challenge. However, even without the detailed understanding of magnetic monopole solutions in NCSYM, the equivalence between (I) and (II) can be exploited to establish some basic properties of these objects. For example, the existence, the stability, the mass, and the supersymmetry of these states can be understood in the language of brane construction in (II). In this formalism, it is also easy to establish similar properties of $`(p,q)`$-dyons and string junctions. These brane constructions provide a strong evidence that the corresponding objects exist in (I). ### 3.2 Brane construction of the NCSYM monopoles In the formalism of the field theory brane constructions, magnetic monopoles in Higgsed SYM have a natural realization as D-strings suspended between a pair of parallel but separated D3-branes. 
A similar configuration exists in (II) and is a natural candidate for a state which gets mapped to the magnetic monopole of (I) under the relation (2.5). One important difference between (II) and the usual situation is the fact that the background NSNS 2-form $`B_{23}`$ also induces a background RR 2-form $`A_{01}=\frac{1}{g_\mathrm{s}}\sqrt{\frac{B_{23}^2}{1+B_{23}^2}}`$ which couples to the world volume of the suspended D-string . This effect can also be interpreted as the force felt by the magnetic charge at the endpoint of the suspended D-string in the background of a constant magnetic field in the 1-direction. The overall effect is to tilt the suspended D-string in the 1-direction and to change the overall energy of the configuration (see Figure 2). The extent of the tilt and the change in the energy can be found by obtaining the minimal energy configuration of the D-string DBI action in the RR 2-form background at weak string coupling $$S=\frac{1}{2\pi \alpha ^{\prime }}\int _0^{2\pi \alpha ^{\prime }U}dx_9\left(\frac{1}{g_\mathrm{s}}\sqrt{1+\left(\frac{dx_1}{dx_9}\right)^2}-A_{01}\frac{dx_1}{dx_9}\right).$$ (3.7) It is an elementary exercise to show that this expression is minimized for $`dx_1/dx_9=B`$, and that the minimum mass is $$m=\frac{U}{g_\mathrm{s}}\left(\sqrt{1+B^2}-\frac{B^2}{\sqrt{1+B^2}}\right)=\frac{2\pi }{\widehat{g}_{\mathrm{YM}}^2}U$$ (3.8) where we used (2.4) to express the result in terms of the parameters of the NCSYM (I). Despite the fact that the suspended D-string is tilted in the 1-direction in response to the background fields, the mass remains exactly the same as in the ordinary SYM. It is also interesting to compute the “non-locality” of the suspended D-string indicated by “$`\delta `$” in Figure 2: $$\delta =\frac{dx_1}{dx_9}2\pi \alpha ^{\prime }U=2\pi \mathrm{\Delta }^2U.$$ (3.9) This length therefore remains constant in the decoupling limit $`\alpha ^{\prime }\to 0`$, in spite of the fact that the slope $`dx_1/dx_9`$ diverges in this limit. It is straightforward to count the number of supersymmetries preserved by this configuration. Let us denote the spinors representing the 32 supercharges of type IIB theory by $$ϵ_{-}=ϵ_L-ϵ_R,ϵ_+=ϵ_L+ϵ_R.$$ (3.10) As we mentioned earlier, D3-branes in the background of $`B_{23}`$ can be thought of as a bound state of $`n_1`$ D-strings and $`n_3`$ D3-branes. Such a configuration places the constraint $$ϵ_{-}=\mathrm{\Gamma }^0\mathrm{\Gamma }^1(\mathrm{sin}(\varphi )ϵ_{-}+\mathrm{\Gamma }^2\mathrm{\Gamma }^3\mathrm{cos}(\varphi )ϵ_+)$$ (3.11) on the supercharges, where $`\mathrm{tan}(\varphi )=B`$. This result can be easily obtained by following the supersymmetry of the $`(p,q)=(n_1,n_3)`$ string through a chain of duality transformations. On the other hand, a D-string tilted in the 19-plane by the angle $`\varphi =\mathrm{tan}^{-1}(B)`$ preserves $$ϵ_{-}=\mathrm{\Gamma }^0\mathrm{\Gamma }^\varphi ϵ_{-},ϵ_+=\mathrm{\Gamma }^0\mathrm{\Gamma }^\varphi ϵ_+$$ (3.12) where $$\mathrm{\Gamma }^\varphi =\mathrm{\Gamma }^1\mathrm{sin}(\varphi )+\mathrm{\Gamma }^9\mathrm{cos}(\varphi ).$$ (3.13) The two constraints in (3.12) reduce the number of preserved supersymmetries from 32 to 16. It turns out that (3.11) closes among spinors satisfying (3.12), and reduces the number of independent supersymmetries from 16 to 8. Therefore, this brane configuration preserves the same number of supersymmetries as the magnetic monopole of the $`𝒩=4`$ SYM.
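The minimization leading to (3.8) can be checked symbolically. The sketch below (our addition, with the sign convention in which the RR coupling favors a positive tilt) minimizes the energy per unit $`U`$ of (3.7) over the slope $`s=dx_1/dx_9`$:

```python
import sympy as sp

s, B, gs = sp.symbols('s B g_s', positive=True)

# integrand of (3.7) per unit x9: DBI tension minus the RR coupling,
# with A_01 = (1/g_s) * B / sqrt(1 + B^2)
E = (sp.sqrt(1 + s**2) - s * B / sp.sqrt(1 + B**2)) / gs

tilt = sp.solve(sp.Eq(sp.diff(E, s), 0), s)[0]
mass_density = sp.simplify(E.subs(s, tilt))
print(tilt)           # -> B, the tilt quoted before (3.8)
print(mass_density)   # -> 1/(g_s*sqrt(1+B**2)), i.e. 2*pi/g_YM^2 by (2.4)
```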
We are interested in the supersymmetry of these states in the field theory limit where we scale $`B=\mathrm{\Delta }^2/\alpha ^{\prime }\to \infty `$ keeping $`\mathrm{\Delta }`$ fixed. In this limit linear combinations of (3.11) and (3.12) can be assembled into the following independent set of conditions $`ϵ_{-}=\mathrm{\Gamma }^0\mathrm{\Gamma }^1ϵ_{-},ϵ_+=\mathrm{\Gamma }^0\mathrm{\Gamma }^1ϵ_+,`$ (3.14) $`ϵ_{-}=\mathrm{\Gamma }^9\mathrm{\Gamma }^1\mathrm{\Gamma }^2\mathrm{\Gamma }^3ϵ_+.`$ (3.15) These conditions are satisfied by 8 spinor components, indicating that the magnetic monopole preserves 8 out of the 16 supercharges in the field theory limit. The brane configuration described in this section is precisely the S-dual of the configuration considered in , except for the fact that in , it was the D3-brane that was tilted instead of the D-string. The two descriptions can be mapped from one to the other by simply rotating the entire system. Although rotating the branes seems like a trivial operation, it amounts to changing the static gauge condition in the language of the DBI action. The fact that this makes implicit reference to the gravitational sector of the theory means that this is not a symmetry in the field theory limit. It is more like a duality transformation mapping equivalent physical systems between two descriptions. Let us therefore refer to the tilted D3-brane description as (III). One particular advantage of (III) is the fact that the field configuration corresponding to this brane configuration is easily understood. Thinking of the pair of D3-branes as giving rise to a $`U(2)=U(1)\times SU(2)`$ gauge theory, the configuration of Figure 2 is simply $`F_{23}=\partial _1\mathrm{\Phi }_9=B`$ embedded into the $`U(1)`$ sector and an ordinary Prasad-Sommerfield monopole embedded into the $`SU(2)`$ sector . The equivalence between (II) and (III) also sheds light on the nature of (II) when expanded in $`\theta `$. When (III) is interpreted as a BIon, the fields are well defined as a single valued function. When (III) is rotated to (II), this single-valuedness is lost. The field configuration must now contain branch cuts to account for the multi-valuedness in some region of the D3-brane world volume. Since such a field configuration is non-analytic, an expansion in $`\theta `$ is likely not to yield a uniformly converging series, and this may have profound implications for the map between (I) and (II). Especially in light of the fact that (II) seems pathological from many points of view, having a more conventional alternative description (III) may prove to be extremely useful in future investigations. ### 3.3 Magnetic monopoles at large N and large ’t Hooft coupling Before concluding this section, let us pause for a moment and briefly describe what happens to the magnetic monopoles in the NCSYM at large ’t Hooft coupling and large $`N`$. Consider $`SU(N+1)`$ broken to $`SU(N)\times U(1)`$. At large coupling, this $`SU(N)`$ sector is described by the supergravity background and the $`U(1)`$ sector appears as a D3-brane probe in this background.
The supergravity background describing the near horizon region of the $`N`$ D3-branes in the background of $`B_{23}`$ is given by $`ds^2`$ $`=`$ $`\alpha ^{\prime }\left\{\left({\displaystyle \frac{U^2}{\sqrt{\lambda }}}\right)(-dt^2+dx_1^2)+\left({\displaystyle \frac{\sqrt{\lambda }U^2}{\lambda +\mathrm{\Delta }^4U^4}}\right)(dx_2^2+dx_3^2)+{\displaystyle \frac{\sqrt{\lambda }}{U^2}}dU^2+\sqrt{\lambda }d\mathrm{\Omega }^2\right\},`$ $`e^\varphi `$ $`=`$ $`{\displaystyle \frac{\widehat{g}_{\mathrm{YM}}^2}{2\pi }}\sqrt{{\displaystyle \frac{\lambda }{\lambda +\mathrm{\Delta }^4U^4}}},A_{01}={\displaystyle \frac{2\pi }{\widehat{g}_{\mathrm{YM}}^2}}{\displaystyle \frac{\alpha ^{\prime }\mathrm{\Delta }^2U^4}{\lambda }},B_{23}={\displaystyle \frac{\alpha ^{\prime }\mathrm{\Delta }^2U^4}{\lambda +\mathrm{\Delta }^4U^4}},`$ (3.16) where $`\lambda =4\pi \widehat{g}_{\mathrm{YM}}N`$. We wish to find the minimal configuration for the probe D-string action $$S=\frac{1}{2\pi \alpha ^{\prime }}\int dx_1\left(e^{-\varphi }\sqrt{G_{00}(G_{11}+G_{UU}(U^{\prime }(x_1))^2)}-A_{01}\right).$$ (3.17) Near the probe D3-brane, the magnetic charge of the D-string will feel the same force as in the case of flat space, so we impose the boundary condition $`U^{\prime }(x_1)=1/\mathrm{\Delta }^2`$ at the value of $`U`$ where we place the probe D3-brane. Rather remarkably, the configuration $$U(x_1)=\frac{1}{\mathrm{\Delta }^2}x_1,$$ (3.18) i.e. a tilted straight line, is a solution to this problem, and when the solution and the background are substituted into (3.17) we find $$S=\int dx\frac{2\pi }{\widehat{g}_{\mathrm{YM}}^2}\frac{1}{\mathrm{\Delta }^2}=\frac{2\pi }{\widehat{g}_{\mathrm{YM}}^2}U$$ (3.19) which, as expected for a BPS state, is the same mass that we found in the weakly coupled limit. ## 4 (p,q)-Dyons and string junctions in NCSYM In the previous section, we described the interpretation of the magnetic monopoles of the NCSYM in the language of (II) and found that they have the same mass as in the ordinary SYM. It is extremely straightforward to repeat the analysis of the previous section for the case of $`(p,q)`$-dyons. There will be some qualitative differences in the pattern of supersymmetry breaking which we will discuss below. Once the basic properties of the $`(p,q)`$-dyons are understood, it is natural to consider the possibility of forming a state corresponding to a string junction . We will examine the existence, the stability, and the supersymmetry of these junction states. ### 4.1 (p,q)-Dyons in NCSYM It is extremely straightforward to generalize the discussion of the previous section to the $`(p,q)`$-dyon. The expression for the action (3.7) is generalized to $$S=\frac{1}{2\pi \alpha ^{\prime }}\int _0^{2\pi \alpha ^{\prime }U}dx_9\left(\sqrt{p^2+\frac{q^2}{g_\mathrm{s}^2}}\sqrt{1+\left(\frac{dx_1}{dx_9}\right)^2}-qA_{01}\frac{dx_1}{dx_9}\right).$$ (4.1) which is minimized by setting $$\frac{dx_1}{dx_9}=\frac{qB}{\sqrt{(1+B^2)g_\mathrm{s}^2p^2+q^2}}.$$ (4.2) The minimum mass is $$m=\sqrt{\frac{(1+B^2)g_\mathrm{s}^2p^2+q^2}{(1+B^2)g_\mathrm{s}^2}}U=\sqrt{p^2+\frac{4\pi ^2q^2}{\widehat{g}_{\mathrm{YM}}^4}}U$$ (4.3) which is precisely identical to the result one would expect from the ordinary SYM. Let us now investigate the number of preserved supersymmetries for these dyons. For the sake of concreteness, we will first consider $`(p,q)=(1,0)`$, which is a W-boson. As in the previous section, the D3-brane puts the constraint (3.11).
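As a consistency check (our addition; overall constants are dropped, and the reduced Lagrangian $`L\propto \sqrt{(\lambda +\mathrm{\Delta }^4U^4)/\lambda }\sqrt{U^4/\lambda +U^{\prime 2}}-\mathrm{\Delta }^2U^4/\lambda `$ follows from inserting (3.16) into (3.17)), one can verify symbolically that the tilted straight line (3.18) satisfies the Euler-Lagrange equation of the probe action:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, lam, D = sp.symbols('x lambda Delta', positive=True)
U = sp.Function('U')

# probe D-string Lagrangian from (3.16)-(3.17), overall constant dropped
L = (sp.sqrt((lam + D**4 * U(x)**4) / lam)
     * sp.sqrt(U(x)**4 / lam + sp.diff(U(x), x)**2)
     - D**2 * U(x)**4 / lam)

eq = euler_equations(L, [U(x)], [x])[0].lhs   # = 0 is the EL equation

# substitute the straight-line ansatz (3.18), highest derivative first
ansatz = x / D**2
res = eq.subs([(sp.Derivative(U(x), (x, 2)), sp.diff(ansatz, x, 2)),
               (sp.Derivative(U(x), x), sp.diff(ansatz, x)),
               (U(x), ansatz)])
print(sp.simplify(res))   # -> 0
```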
The $`(1,0)`$-string, on the other hand, preserves $$ϵ_{-}=\mathrm{\Gamma }^0\mathrm{\Gamma }^9ϵ_+.$$ (4.4) For spinors satisfying (4.4), the supersymmetry constraint (3.11) simplifies to $$\left(1-\mathrm{\Gamma }^0\mathrm{\Gamma }^1\mathrm{sin}(\varphi )-\mathrm{\Gamma }^1\mathrm{\Gamma }^2\mathrm{\Gamma }^3\mathrm{\Gamma }^9\mathrm{cos}(\varphi )\right)ϵ_{-}=0.$$ (4.5) Conditions (4.4) and (4.5) are satisfied by 8 independent spinor components for arbitrary values of $`\varphi `$. (We thank M. Krogh for pointing out an error regarding this point in the earlier version of this paper.) Therefore, we learn that the W-boson in the field theory limit $`\mathrm{tan}(\varphi )=B=\mathrm{\Delta }^2/\alpha ^{\prime }\to \infty `$ also preserves 8 supercharges. It is straightforward to extend this analysis to the case of $`(p,q)`$-dyons. Taking the $`(p,q)`$-string to be oriented in the direction given by (4.2), it is easy to obtain a set of independent constraints in a manner similar to the monopole case of the last section. The number of unbroken supersymmetries is 8, and in the decoupling limit, the surviving supersymmetries are specified by (3.14) in addition to the constraint $`\sqrt{p^2+\left({\displaystyle \frac{2\pi q}{\widehat{g}_{\mathrm{YM}}^2}}\right)^2}\mathrm{\Gamma }^9ϵ_{-}=\mathrm{\Gamma }^1\left(p+{\displaystyle \frac{2\pi q}{\widehat{g}_{\mathrm{YM}}^2}}\mathrm{\Gamma }^2\mathrm{\Gamma }^3\right)ϵ_+,`$ (4.6) which reduces to (3.15) when $`(p,q)=(0,1)`$. We conclude that in the field theory limit, the $`(p,q)`$-dyons are 1/2 BPS objects, precisely analogous to the situation in the ordinary $`𝒩=4`$ SYM. Just as in the magnetic monopole case, one can consider the analogue of (III) where one tilts the D3-brane in such a way as to make the $`(p,q)`$-string point upward. This will simply correspond to embedding the Julia-Zee dyon in the $`SU(2)`$ sector and turning on the $`U(1)`$ part independently. From this standpoint, it is easy to see that the number of preserved supersymmetries is 8. The large $`N`$ and large ’t Hooft coupling limit of the $`(p,q)`$-dyon is also straightforward to analyze. One simply generalizes (3.17) to $$S=\frac{1}{2\pi \alpha ^{\prime }}\int dx_1\left(\sqrt{p^2+q^2e^{-2\varphi }}\sqrt{G_{00}(G_{11}+G_{UU}(U^{\prime }(x_1))^2)}-qA_{01}\right).$$ (4.7) The minimal action configuration satisfying the appropriate boundary condition is simply $$U(x)=\frac{x}{\mathrm{\Delta }^2}\sqrt{1+\frac{\widehat{g}_{\mathrm{YM}}^4p^2}{4\pi ^2q^2}},$$ (4.8) and we find the mass of the $`(p,q)`$-dyon to be $$m=U\sqrt{p^2+\frac{4\pi ^2q^2}{\widehat{g}_{\mathrm{YM}}^4}},$$ (4.9) in agreement with the earlier result from weak coupling (4.3). ### 4.2 String junctions in NCSYM Having established the existence and some basic properties of $`(p,q)`$-dyons, it is natural to consider the status of string junctions. In the absence of the background NSNS 2-form, the existence of a string junction relied on the property of $`(p,q)`$-strings that their tensions can balance, $$\underset{i}{\sum }\vec{T}_{p_i,q_i}=0$$ (4.10) where $$\vec{T}_{p,q}=(p,\frac{q}{g_\mathrm{s}})$$ (4.11) for $`\sum _ip_i=\sum _iq_i=0`$. The components of $`\vec{T}`$ can be, say, in the 8 and the 9 directions. When the effect of the $`B`$-field is taken into account, these vectors are rotated out of the 89-plane into the 1-direction. Now one needs to make sure that the tension balance condition is satisfied in the 1, 8, and 9 directions simultaneously.
The large $`N`$ and large ’t Hooft coupling limit of the $`(p,q)`$-dyon is also straightforward to analyze. One simply generalizes (3.17) to

$$S=\frac{1}{2\pi \alpha ^{}}\int dx_1\left(\sqrt{p^2+q^2e^{-2\varphi }}\sqrt{-G_{00}(G_{11}+G_{UU}(U^{\prime }(x_1))^2)}-qA_{01}\right).$$ (4.7)

The minimal action configuration satisfying the appropriate boundary condition is simply

$$U(x)=\frac{x}{\mathrm{\Delta }^2}\sqrt{1+\frac{\widehat{g}_{\mathrm{YM}}^4p^2}{4\pi ^2q^2}},$$ (4.8)

and we find the mass of the $`(p,q)`$-dyon to be

$$m=U\sqrt{p^2+\frac{4\pi ^2q^2}{\widehat{g}_{\mathrm{YM}}^4}},$$ (4.9)

in agreement with the earlier result from weak coupling (4.3).

### 4.2 String junctions in NCSYM

Having established the existence and some basic properties of $`(p,q)`$-dyons, it is natural to consider the status of string junctions. In the absence of the background NSNS 2-form, the existence of the string junction relied on the property of $`(p,q)`$-strings that their tensions can balance,

$$\underset{i}{\sum }\vec{T}_{p_i,q_i}=0$$ (4.10)

where

$$\vec{T}_{p,q}=\left(p,\frac{q}{g_\mathrm{s}}\right)$$ (4.11)

for $`\underset{i}{\sum }p_i=\underset{i}{\sum }q_i=0`$. The components of $`\vec{T}`$ can be, say, in the 8 and the 9 directions. When the effect of the $`B`$-field is taken into account, these vectors are rotated out of the 89-plane into the 1-direction. Now one needs to make sure that the tension balance condition is satisfied in the 1, 8, and 9 directions simultaneously. It turns out, however, that the entire effect of the $`B`$-field can be accounted for by rotating the tension vector in the 19-plane so that the (1,8,9) components read

$$\vec{T}_{p,q}=\left(\frac{q}{g_\mathrm{s}}\mathrm{sin}(\varphi ),\,p,\,\frac{q}{g_\mathrm{s}}\mathrm{cos}(\varphi )\right),\qquad \mathrm{tan}(\varphi )=B.$$ (4.12)

It is straightforward to verify, by rotating $`\vec{T}_{p,q}`$ out of the 89-plane into the 19-directions, that this vector is oriented relative to the D3-brane worldvolume with the appropriate slope (4.2). Since we can just as easily tilt the D3-branes instead of tilting the $`(p,q)`$-strings, there is a version of (III) for the string junction. The fact that the field configuration for such a state is known might prove useful in the same way that the Prasad-Sommerfield solution in (III) is related to the magnetic monopole in the NCSYM (I).

Clearly, the condition that the sum of $`\vec{T}_{p,q}`$ vanish for conserved $`(p,q)`$-charges in a string junction is still valid, so the string junction exists as a stable state in the presence of the $`B`$ field. Though different supersymmetries are preserved by the respective component $`(p,q)`$-strings in the string network, in view of this stability the whole configuration is expected to preserve some of the supersymmetries. Let us therefore investigate the field theory limit of these configurations more closely.

Consider a junction of strings $`(p_i,q_i)`$, $`i=1,2,3`$, supported by D3-branes localized in the 89-plane with strings meeting at the origin. In order to take the field theory limit of such a configuration, we should scale the distance of the D3-brane to the origin as $`\alpha ^{}U_i`$ with $`\alpha ^{}\to 0`$, oriented in the $`(p,\frac{q}{g_\mathrm{s}\sqrt{1+B^2}})`$ direction in the 89-plane. In other words, the Higgs expectation value of the $`(\mathrm{\Phi }_8,\mathrm{\Phi }_9)`$ field should be chosen to scale according to

$$\vec{U}_i=(\mathrm{\Phi }_8,\mathrm{\Phi }_9)_i=\frac{U_i}{\sqrt{p_i^2+\frac{q_i^2}{(1+B^2)g_\mathrm{s}^2}}}\left(p_i,\frac{q_i}{g_\mathrm{s}\sqrt{1+B^2}}\right).$$ (4.13)

To take the field theory limit, we scale $`g_\mathrm{s}`$ and $`B`$ according to (2.4). Expressed in terms of $`\widehat{g}_{\mathrm{YM}}`$ and $`\mathrm{\Delta }`$, (4.13) reads

$$\vec{U}_i=(\mathrm{\Phi }_8,\mathrm{\Phi }_9)_i=\frac{U_i}{\sqrt{p_i^2+\frac{4\pi ^2}{\widehat{g}_{\mathrm{YM}}^4}q_i^2}}\left(p_i,\frac{2\pi }{\widehat{g}_{\mathrm{YM}}^2}q_i\right)$$ (4.14)

and has a trivial $`\alpha ^{}\to 0`$ limit. These junction states therefore appear to exist in the field theory limit and orient themselves in the usual way in the 89-plane, as we illustrate in Figure 3. Figure 3 does not represent the orientation of the strings outside the 89-plane, but it should be remembered that they are tilted in the 19-plane. The mass of the junction takes the same form as in the commutative case,

$$m=\underset{i=1,2,3}{\sum }\sqrt{p_i^2+\frac{4\pi ^2}{\widehat{g}_{\mathrm{YM}}^4}q_i^2}\,\left|\vec{U}_{p_i,q_i}\right|.$$ (4.15)

The unbroken supersymmetries of the junction in the field theory limit correspond to the spinor components of the supercharges satisfying the constraints of both the monopoles and the W-bosons, (3.14), (3.15), and (4.4).
This can be seen easily from the fact that, since the $`(p_i,q_i)`$-string is now oriented in the direction (4.14) in the 89-plane in the decoupling limit, the constraint for the component $`(p_i,q_i)`$-string becomes

$`\left(p_i\mathrm{\Gamma }^8+\frac{2\pi q_i}{\widehat{g}_{\mathrm{YM}}^2}\mathrm{\Gamma }^9\right)ϵ_{-}=\mathrm{\Gamma }^1\left(p_i+\frac{2\pi q_i}{\widehat{g}_{\mathrm{YM}}^2}\mathrm{\Gamma }^2\mathrm{\Gamma }^3\right)ϵ_+,`$ (4.16)

as a generalization of (4.6). We conclude, therefore, that the objects in the NCSYM corresponding to the field theory limit of the string junctions preserve 4 supercharges, just like their commutative counterparts.

## 5 Conclusions

The goal of this paper was to understand the static properties of the magnetic monopole solution and its cousins in the NCSYM. Instead of working with the Lagrangian formulation of NCSYM (I), we took advantage of the equivalence between NCSYM (I) and the decoupling limit of D3-branes in a background NSNS 2-form potential (II) to study the stable brane configurations corresponding to these states. Using this approach, it is extremely easy to show that there are stable brane configurations corresponding to magnetic monopoles, $`(p,q)`$-dyons, and string junctions, and that they survive in the field theory limit.

Having established some basic properties of these objects in the language of brane constructions, it is natural to wonder how much of this can be understood strictly within the framework of the Lagrangian formalism. It would be especially interesting to find an explicit solution which generalizes the standard Prasad-Sommerfield solution to the non-commutative setup. It is very encouraging that the construction of instanton solutions via the ADHM method admits a natural non-commutative generalization . Indeed, Nahm’s construction of the magnetic monopole also admits a simple non-commutative generalization. One simply solves for the normalized zero modes of the operator

$$0=\widehat{\mathbf{\Delta }}^{\dagger }\widehat{\mathbf{v}}=i\frac{d}{dz}\widehat{\mathbf{v}}(z,x)-\mathbf{x}\star \widehat{\mathbf{v}}(z,x)-\mathbf{T}\,\widehat{\mathbf{v}}(z,x),$$ (5.1)

and computes

$$\widehat{A}_i=\int _{-U/2}^{U/2}dz\,\mathbf{v}^{\dagger }(z,x)\partial _i\mathbf{v}(z,x),\qquad \widehat{\mathrm{\Phi }}=\int _{-U/2}^{U/2}dz\,z\,\mathbf{v}^{\dagger }(z,x)\mathbf{v}(z,x).$$ (5.2)

The non-commutativity is reflected in the $`\star `$-product in (5.1), and as long as $`\mathbf{\Delta }^{\dagger }\mathbf{\Delta }`$ satisfies the usual requirement that it be invertible and that it commutes with the quaternions, all the steps in the argument leading to the self-duality of (5.2) follow immediately from the same argument in the commutative case .
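Although we cannot solve (5.1) in closed form once the $`\star `$-product is turned on, the commutative limit is completely explicit and gives a useful sanity check on (5.2). The sketch below uses our own minimal setup, not one taken from the text above: charge-one $`SU(2)`$ Nahm data $`\mathbf{T}=0`$ on the interval $`(-1,1)`$ (unit Higgs vev) with $`\mathbf{x}`$ along the 3-axis, so the zero modes diagonalize. The Higgs field computed from (5.2) then reproduces the Prasad-Sommerfield profile $`\mathrm{coth}(2r)-1/(2r)`$.

```python
# Sketch: commutative limit of (5.1)-(5.2) for the charge-one SU(2)
# monopole. Assumptions: T = 0, interval (-1, 1), x = (0, 0, r).
# The zero modes of (d/dz + r sigma_3) v = 0 are exponentials along the
# sigma_3 eigenspinors; (5.2) then yields Higgs eigenvalues
# +/- (coth(2r) - 1/(2r)), the Prasad-Sommerfield profile.
import numpy as np
from scipy.integrate import quad

def higgs_from_nahm(r):
    norm = quad(lambda z: np.exp(2*r*z), -1, 1)[0]      # <v|v> for one mode
    moment = quad(lambda z: z*np.exp(2*r*z), -1, 1)[0]  # <v|z|v>, as in (5.2)
    return moment/norm

for r in (0.5, 1.0, 2.0, 5.0):
    ps = 1/np.tanh(2*r) - 1/(2*r)        # closed-form BPS profile
    print(r, higgs_from_nahm(r), ps)     # the two columns agree
```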
Despite tantalizing similarities with the commutative case, we were not able to solve (5.1) in closed form to proceed further. It would be very interesting to see if an explicit expression for the non-commutative BPS monopole can be found.

## Acknowledgments

We would like to thank T. Asakawa, N. Itzhaki, I. Kishimoto, M. Krogh, and S. Moriyama for illuminating discussions. A part of this work was carried out at the Summer Institute ’99, Japan, and we thank its participants for providing a stimulating working environment. The work of A. H. is supported in part by the National Science Foundation under Grant No. PHY94-07194. K. H. is supported in part by Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture of Japan (#3160).
## 1 Introduction

Historically discrete torsion has been a rather mysterious aspect of string theory. Discrete torsion was originally discovered as an ambiguity in the choice of phases to assign to twisted sectors of string orbifold partition functions. Although other work has been done on the subject (see, for example, ), no work done to date has succeeded in giving any sort of genuinely deep understanding of discrete torsion. In fact, discrete torsion has sometimes been referred to as an inherently stringy degree of freedom, without any geometric analogue.

In this paper we shall give a purely geometric understanding of discrete torsion. Specifically, we describe discrete torsion as a precise analogue of orbifold Wilson lines, for 2-form fields rather than vector fields. Put another way, we shall argue that discrete torsion can be understood as “orbifold Wilson surfaces.” Our description of discrete torsion hinges on a deeper understanding of type II $`B`$-fields than is common in the literature. More specifically, just as vector potentials (gauge fields) are understood as connections on bundles, we describe $`B`$-fields as connections on (1-)gerbes. Although gerbes seem to be well-known in some circles, their usefulness does not seem to be widely appreciated. We shall review a recent description of transition functions for gerbes, given in , which provides a simplified language in which to discuss gerbes, and then shall discuss gerbes themselves (in the language of stacks) in detail in . As accessible accounts of gerbes which provide the level of detail we need do not seem to exist, we provide such an overview in . In a later paper we shall provide a simplified way of understanding orbifold group actions on $`B`$ fields, and shall also derive additional physical manifestations of discrete torsion from the ideas presented here.

Let us take a moment to give some general explanation of our ideas. In defining an orbifold of a physical theory, the orbifold group $`\mathrm{\Gamma }`$ must define a symmetry of the theory. Specifying the action of the orbifold group on the underlying topological space is, however, not sufficient when bundles or other objects (such as gerbes) are present – specifying an action of the orbifold group $`\mathrm{\Gamma }`$ on a space does not uniquely specify an action of $`\mathrm{\Gamma }`$ on a bundle. Put another way, for any given action of $`\mathrm{\Gamma }`$ on the base space, there can be multiple inequivalent actions of $`\mathrm{\Gamma }`$ on a bundle or a gerbe. This fact is usually glossed over in descriptions of orbifolds.

What is the physical meaning of this ambiguity in the lifting of the action of $`\mathrm{\Gamma }`$ to a bundle? Specifying a specific action of $`\mathrm{\Gamma }`$ on a bundle with connection implicitly defines an orbifold Wilson line. In other words, orbifold Wilson lines are precisely choices of actions of $`\mathrm{\Gamma }`$ on a bundle with connection. We shall show that, similarly, discrete torsion is a choice of an action of $`\mathrm{\Gamma }`$ on a (1-)gerbe with connection. Technically, specifying a lift of $`\mathrm{\Gamma }`$ to a bundle, or a gerbe, is known as specifying an “equivariant structure” on the bundle or gerbe. Thus, in this paper we shall often speak of classifying equivariant structures, which means classifying lifts of $`\mathrm{\Gamma }`$.
Our results are not restricted to trivial gerbes – in other words, our description of discrete torsion applies equally well to compactifications with nontrivial $`H`$-field strengths. Also, we do not make any assumptions concerning the nature of $`\mathrm{\Gamma }`$ – it does not matter whether $`\mathrm{\Gamma }`$ is abelian or nonabelian. It also does not matter whether $`\mathrm{\Gamma }`$ acts freely on the underlying topological space – in our description, freely-acting orbifolds are understood in precisely the same way as orbifolds with fixed points. We are also able to describe analogues of discrete torsion for the type IIA RR 1-form and the IIB RR 2-form fields. In addition, our approach makes it clear that there should exist analogues of discrete torsion for the other tensor fields appearing in supergravity theories. We describe a specific program for rigorously deriving analogues of discrete torsion for many of the other type II tensor field potentials (specifically, those which can be understood in terms of gerbes), and conjecture the results – that analogues of discrete torsion for $`p`$-form fields are measured by $`H^p(\mathrm{\Gamma },U(1))`$, where $`\mathrm{\Gamma }`$ is the orbifold group.

We begin in section 2 by reviewing orbifold Wilson lines in language that will easily generalize. More specifically, we describe orbifold Wilson lines as an ambiguity in lifting the action of an orbifold group to a bundle with connection. In section 3 we give a basic discussion of $`n`$-gerbes, describing how they can be used to understand many of the tensor field potentials appearing in supergravity theories. This discussion is necessary because we shall describe discrete torsion as a precise analogue of orbifold $`U(1)`$ Wilson lines for 1-gerbes. In section 4 we outline precisely how one can derive discrete torsion as an ambiguity in specifying the action of an orbifold group on a 1-gerbe with connection, in other words, as an analogue of orbifold Wilson lines. We do not give a rigorous derivation of discrete torsion in this paper; see instead . Finally, we include an appendix on group cohomology, which is used a great deal in this paper and may not be familiar to the reader.

In this paper we concentrate on developing some degree of physical intuition for our results on discrete torsion, and give simplified (and rather loose) derivations. A rigorous derivation of our results on 1-gerbes, together with a detailed description of 1-gerbes, is provided in a separate paper . We should remark that our purpose in writing this paper and was primarily conceptual, rather than computational, in nature. In these papers we give a new conceptual understanding of discrete torsion. Along the way we provide some fringe benefits, such as an understanding of orbifold Wilson lines for nontrivial bundles, a description of discrete torsion in backgrounds with nontrivial torsion<sup>1</sup><sup>1</sup>1Torsion in the sense of a nontrivial 3-form curvature, as opposed to the mathematical senses., and a thorough pedagogical review of 1-gerbes in terms of sheaves of categories. However, we do not provide significant new computational methods.

We should mention that there are a few issues concerning discrete torsion which we shall not address in either this paper or . First, we shall only discuss discrete torsion for orbifold singularities.
One might wonder if discrete torsion, or some close analogue, can be defined for non-orbifold singularities, such as conifold singularities; we shall not address this matter here. Second, we shall not attempt to discuss how turning on discrete torsion alters the moduli space structure, i.e., how discrete torsion changes the allowed resolutions of singularities. See instead for a preliminary discussion of this matter. These matters will be discussed in .

We should also mention that in an earlier version of this paper, we made a slightly stronger claim than appears here. Namely, in earlier versions of this paper we claimed that the difference between any two equivariant structures on a 1-gerbe with connection is defined by an element of $`H^2(\mathrm{\Gamma },U(1))`$. However, we have since corrected a minor error in . Here, we only claim that the difference between equivariant structures is an element of a group which includes $`H^2(\mathrm{\Gamma },U(1))`$ – in other words, there are additional degrees of freedom which we missed previously. These additional degrees of freedom will be discussed in more detail in .

## 2 Orbifold Wilson lines

We shall begin by making a close examination of principal bundles and heterotic string orbifolds, in order to carefully review the notion of an “orbifold Wilson line.” In the next section, we describe gerbes, which provide the generalization of line bundles required to describe higher rank tensor potentials in supergravity theories. Once we have given a basic description of gerbes, we shall describe how the notion of orbifold Wilson line generalizes to the case of gerbes, and in the process, recover discrete torsion (and its analogues for the other tensor potentials appearing in supergravity theories) as a precise analogue of an orbifold Wilson line for a gerbe.

Let $`X`$ be a smooth manifold, and $`\mathrm{\Gamma }`$ a discrete group acting by diffeomorphisms on $`X`$. In this section we shall discuss how to extend the action of $`\mathrm{\Gamma }`$ to a bundle on $`X`$ and to a connection on the bundle. We shall explicitly recover, for example, the orbifold Wilson lines that crop up in toroidal heterotic orbifolds. In the next section we shall generalize the same methods to describe discrete torsion as an analogue of orbifold Wilson lines for higher-degree gerbes.

### 2.1 Basics of orbifold Wilson lines

Before we begin discussing orbifold Wilson lines in mathematical detail, we shall take a moment to discuss them in generality. We should point out that in this paper we will always assume that the gauge bundles in question are abelian; that is, any principal $`G`$-bundle appearing implicitly or explicitly has abelian $`G`$.

In constructing heterotic orbifolds, people often mention orbifold Wilson lines. What are they? In constructing a heterotic toroidal orbifold, one can combine the action of $`\mathrm{\Gamma }`$ on $`X`$ with a gauge transformation, so as to create Wilson lines on the quotient space. The action on the gauge bundle defines the lift of $`\mathrm{\Gamma }`$ to the bundle. Such Wilson lines are typically called orbifold Wilson lines.

The simple description above works precisely in the case that the bundle being orbifolded is trivial. In this case, there is a canonical (in fact, trivial) lift that leaves the fibers invariant, and any other lift can be described as combining a gauge transformation with the action of $`\mathrm{\Gamma }`$ on the base.
In general, when the bundle is nontrivial, there is no canonical lift, and so one has to work harder. A specification of a lift of $`\mathrm{\Gamma }`$ to a bundle is known technically as a choice of “equivariant structure” on the bundle, and so to derive orbifold Wilson lines in the general context we will speak of classifying equivariant structures. We shall study equivariant structures in much more detail in the next subsection. In the remainder of this subsection, we shall attempt to give some intuition for the relevant ideas.

Consider for simplicity the special case of an orbifold group $`\mathrm{\Gamma }`$ acting freely (without fixed points) on a space $`X`$. How precisely do we describe a Wilson line on the quotient space? Let $`x\in X`$, and pick some path from $`x`$ to $`gx`$ for some $`g\in \mathrm{\Gamma }`$. In essence, a Wilson loop on the quotient space $`X/\mathrm{\Gamma }`$ is the composition of the (nonclosed) Wilson loop along this path from $`x`$ to $`gx`$ with a gauge transformation describing the action of $`g\in \mathrm{\Gamma }`$ on the corresponding principal bundle. It should be clear from this description that equivariant structures on a bundle encode information about Wilson lines on the quotient space, among other things. For this reason, choices of equivariant structures are often called orbifold Wilson lines.

How should orbifold Wilson lines be classified? Again, for simplicity assume $`\mathrm{\Gamma }`$ acts freely on $`X`$. We shall examine how flat connections on the quotient space are related to flat connections on the cover, in order to shed some light. (In later sections we shall not assume that bundles under consideration admit flat connections; we make this assumption here in order to perform an enlightening calculation.) First, recall that for any $`G`$, the moduli space of flat $`G`$-connections on $`X/\mathrm{\Gamma }`$ is given by

$$\text{Hom}(\pi _1(X/\mathrm{\Gamma }),G)/G$$

where $`G`$ acts by conjugation. For abelian $`G`$, conjugation acts trivially, and so the moduli space of flat $`G`$-connections on $`X/\mathrm{\Gamma }`$ is simply

$$\text{Hom}(\pi _1(X/\mathrm{\Gamma }),G)$$

Thus, in order to study orbifold Wilson lines on $`X`$, we need to understand how $`\pi _1(X/\mathrm{\Gamma })`$ is related to $`\pi _1(X)`$. Assuming $`\mathrm{\Gamma }`$ is discrete and $`X`$ is connected, then from the long exact sequence for homotopy<sup>2</sup><sup>2</sup>2Applied to the principal $`\mathrm{\Gamma }`$ bundle $$\mathrm{\Gamma }\to X\to X/\mathrm{\Gamma }$$ whose existence follows from the fact that $`\mathrm{\Gamma }`$ acts freely. we find the short exact sequence

$$0\to \pi _1(X)\to \pi _1(X/\mathrm{\Gamma })\to \pi _0(\mathrm{\Gamma })\to 0$$

so $`\pi _1(X/\mathrm{\Gamma })`$ is an extension of $`\pi _0(\mathrm{\Gamma })\cong \mathrm{\Gamma }`$ by $`\pi _1(X)`$. As $`\pi _1(X/\mathrm{\Gamma })`$ receives a contribution from $`\mathrm{\Gamma }`$, we see that orbifolding enhances the space of possible Wilson lines by $`\text{Hom}(\mathrm{\Gamma },G)`$, roughly speaking. More precisely, we have the long exact sequence

$$0\to \text{Hom}(\mathrm{\Gamma },G)\to \text{Hom}(\pi _1(X/\mathrm{\Gamma }),G)\to \text{Hom}(\pi _1(X),G)\to \mathrm{\cdots }$$

For example, for the special case $`G=U(1)`$, we have the short exact sequence<sup>3</sup><sup>3</sup>3Using the fact that $`U(1)=𝐑/𝐙`$ is an injective $`𝐙`$-module [12, section I.7].
$$0\to H^1(\mathrm{\Gamma },U(1))\to \text{Hom}(\pi _1(X/\mathrm{\Gamma }),U(1))\to \text{Hom}(\pi _1(X),U(1))\to 0$$

where $`H^1(\mathrm{\Gamma },U(1))`$ denotes group cohomology of $`\mathrm{\Gamma }`$ with trivial action on the coefficients $`U(1)`$. Thus, we see explicitly that for $`\mathrm{\Gamma }`$ discrete and freely-acting, flat $`U(1)`$-connections on the quotient space pick up a contribution from the group cohomology group $`H^1(\mathrm{\Gamma },U(1))`$, which we can identify with orbifold $`U(1)`$ Wilson lines.

The results of the discussion above are important and bear repeating. We just argued that, for $`\mathrm{\Gamma }`$ discrete and freely-acting, flat $`U(1)`$ connections on the quotient get a contribution from $`H^1(\mathrm{\Gamma },U(1))`$. We shall argue in later sections that for general abelian $`G`$ and general discrete $`\mathrm{\Gamma }`$ (not necessarily freely-acting), orbifold $`G`$ Wilson lines are classified by $`H^1(\mathrm{\Gamma },G)`$, where $`H^1(\mathrm{\Gamma },G)`$ denotes group cohomology of $`\mathrm{\Gamma }`$, with coefficients<sup>4</sup><sup>4</sup>4Technically, we are also assuming that the action of $`\mathrm{\Gamma }`$ on the coefficients $`G`$ is trivial. We shall make this assumption on group cohomology throughout this paper. $`G`$. In later sections we shall also not make any assumptions concerning the nature of the bundle – we shall not assume the bundle in question admits flat connections. We shall rigorously derive the classification of orbifold Wilson lines as a classification of equivariant structures on principal bundles with connection. When we classify equivariant structures on gerbes with connection<sup>5</sup><sup>5</sup>5And band $`C^{\mathrm{\infty }}(U(1))`$, technically., we shall recover a classification which includes $`H^2(\mathrm{\Gamma },U(1))`$.

At this point we shall take a moment to clarify an issue that may have been puzzling the reader. We claimed in the introduction that we would describe discrete torsion in terms of orbifold Wilson lines for $`B`$ fields. However, discrete torsion is measured in terms of group cohomology, whereas (for flat connections) Wilson lines are given by $`\text{Hom}(\pi _1,G)/G`$. However, for the special case $`G=U(1)`$ (and, say, $`X`$ simply connected, so that $`\pi _1(X/\mathrm{\Gamma })\cong \mathrm{\Gamma }`$),

$$\text{Hom}(\pi _1,G)/G=H^1(\mathrm{\Gamma },U(1))$$

where $`H^1(\mathrm{\Gamma },U(1))`$ is group cohomology. It should now be clear to the reader that the usual classification of discrete torsion – given by $`H^2(\mathrm{\Gamma },U(1))`$ – is quite similar to this formal classification of orbifold $`U(1)`$ Wilson lines – given by $`H^1(\mathrm{\Gamma },U(1))`$. In particular, the reader should now be less surprised that orbifold Wilson lines and discrete torsion are related.
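Both groups in this comparison are easy to enumerate for small finite $`\mathrm{\Gamma }`$. The sketch below takes $`\mathrm{\Gamma }=𝐙_n\times 𝐙_n`$: it lists the $`n^2`$ characters making up $`H^1(\mathrm{\Gamma },U(1))=\text{Hom}(\mathrm{\Gamma },U(1))`$, and verifies the group 2-cocycle condition for one standard representative of a discrete torsion class (we are assuming the standard fact $`H^2(𝐙_n\times 𝐙_n,U(1))\cong 𝐙_n`$; the bilinear cocycle below is the usual choice):

```python
# Sketch: H^1 and a discrete torsion 2-cocycle for Gamma = Z_n x Z_n.
import numpy as np
from itertools import product

n = 3
Gamma = list(product(range(n), repeat=2))
w = np.exp(2j*np.pi/n)

def mult(g, h):
    return ((g[0] + h[0]) % n, (g[1] + h[1]) % n)

# H^1(Gamma, U(1)) = Hom(Gamma, U(1)): the n^2 characters w^(j a + k b)
chars = [(lambda g, j=j, k=k: w**(j*g[0] + k*g[1]))
         for j in range(n) for k in range(n)]
homs = [all(np.isclose(c(g)*c(h), c(mult(g, h)))
            for g in Gamma for h in Gamma) for c in chars]
print(len(chars), all(homs))       # 9 True for n = 3

# usual bilinear representative of a discrete torsion class in H^2
eps = lambda g, h: w**(g[0]*h[1])

# group 2-cocycle condition: eps(g,h) eps(gh,k) = eps(h,k) eps(g,hk)
print(all(np.isclose(eps(g, h)*eps(mult(g, h), k),
                     eps(h, k)*eps(g, mult(h, k)))
          for g in Gamma for h in Gamma for k in Gamma))   # True
```

In twisted-sector partition functions it is the coboundary-invariant combination $`ϵ(g,h)/ϵ(h,g)`$ built from such a cocycle that appears as the discrete torsion phase.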
One issue we have glossed over so far concerns “fake” Wilson lines, which we shall now take a moment to discuss. Consider for example the orbifold $`𝐂^2/𝐙_2`$. This space is simply-connected, yet the usual prescriptions for orbifold Wilson lines tell us that there is a physical degree of freedom (given by $`\text{Hom}(𝐙_2,G)`$) which we would usually associate with Wilson lines. Such degrees of freedom are often referred to as fake Wilson lines . This degree of freedom is in fact physical – not some unphysical artifact. In the next few sections we shall see mathematically that one will recover degrees of freedom measured by

$$H^1(\mathrm{\Gamma },G)=\text{Hom}(\mathrm{\Gamma },G)$$

for $`\mathrm{\Gamma }`$-orbifolds of spaces with $`G`$-bundles (with $`G`$ abelian), regardless of whether or not $`\mathrm{\Gamma }`$ is freely acting.

How precisely should fake Wilson lines be interpreted on the quotient space? It can be shown [18, chapter 14] that if one quotients the total spaces of bundles, using equivariant structures defining fake Wilson lines, then the resulting object over the quotient space is not a fiber bundle. (For example, a $`𝐙_2`$ orbifold of a rank $`n`$ complex vector bundle over $`𝐂^2`$ with nontrivial orbifold Wilson lines is not a fiber bundle over the quotient space $`𝐂^2/𝐙_2`$ – one gets an object whose fiber over most points is $`𝐂^n`$, appropriate for a rank $`n`$ vector bundle, but whose fiber over the singularity is $`𝐂^n/𝐙_2`$.) We have not pursued this question in depth, but we do have a strong suspicion. In the case that $`X/\mathrm{\Gamma }`$ is an algebraic variety<sup>6</sup><sup>6</sup>6Technically, a Noetherian normal variety., it is possible to construct (reflexive) sheaves which are closely related to, but not quite the same as, bundles. For example, on $`𝐂^2/𝐙_2`$, in addition to line bundles there are also reflexive rank 1 sheaves. We find it very tempting to conjecture that these reflexive rank 1 sheaves correspond to quotients of equivariant line bundles on $`𝐂^2`$ with nontrivial fake Wilson lines, and that more generally, fake Wilson lines on quotient spaces that are algebraic varieties have an interpretation in terms of reflexive sheaves which are not locally free. Moreover, isomorphism classes of reflexive sheaves on affine spaces $`𝐂^2/\mathrm{\Gamma }`$ are classified in the same way as orbifold Wilson lines , a fact that forms one corner of the celebrated McKay correspondence. We shall have nothing further to say on this matter in this paper.

Before we move on to discuss lifts $`\stackrel{~}{\mathrm{\Gamma }}`$ of $`\mathrm{\Gamma }`$ acting on line bundles with connection, we shall discuss some amusing technical points regarding equivariant bundles. One natural question to ask is the following: given some equivariant structure on a bundle $`P`$, how can one compute the characteristic classes of the quotient bundle $`P/\mathrm{\Gamma }`$? The basic idea is to construct a principal $`G`$-bundle on $`E\mathrm{\Gamma }\times _\mathrm{\Gamma }X`$, such that the projection to $`X/\mathrm{\Gamma }`$ yields the quotient bundle. We shall not work out the details here; see instead , where this program is pursued in detail. In principle one could follow the same program for the equivariant gerbes we shall construct in later sections, and discuss their equivariant characteristic classes. However, we shall not pursue this direction in this paper.

In passing we should also note that on rare occasions, equivariant bundles and equivariant bundles with connection have been discussed in the physics literature in terms of “V-bundles” . The language of V-bundles is rather different from the language we shall use in this paper to describe orbifold Wilson lines, though it is technically equivalent.

### 2.2 Equivariant bundles

Let $`P`$ be a principal $`G`$-bundle on $`X`$ for some abelian Lie group $`G`$ (e.g., $`G=U(1)^n`$ for some positive integer $`n`$).
Given the action of $`\mathrm{\Gamma }`$ on $`X`$, we would like to study lifts of the action of $`\mathrm{\Gamma }`$ on $`X`$ to the total space of $`P`$. What precisely is a lift of the action of $`\mathrm{\Gamma }`$? Let $`\pi :P\to X`$ denote the projection; then a lift of the action of an element $`g\in \mathrm{\Gamma }`$ is a diffeomorphism $`\stackrel{~}{g}:P\to P`$ such that $`\stackrel{~}{g}`$ is a morphism of principal $`G`$-bundles. The statement that $`\stackrel{~}{g}`$ is a morphism of bundles means precisely that the following diagram commutes:

$$\begin{array}{ccc}P& \stackrel{\stackrel{~}{g}}{\longrightarrow }& P\\ \pi \downarrow \text{ }& & \text{ }\downarrow \pi \\ X& \stackrel{g}{\longrightarrow }& X\end{array}$$ (1)

The statement that $`\stackrel{~}{g}`$ is a morphism of principal $`G`$-bundles, not merely a morphism of bundles, means that, in addition to the commutativity of the diagram above, the action of $`G`$ on the total space must commute with $`\stackrel{~}{g}`$. In other words, $`\stackrel{~}{g}(hp)=h\stackrel{~}{g}(p)`$ for all $`h\in G`$ and $`p\in P`$. Furthermore, in order to lift the action of the entire group $`\mathrm{\Gamma }`$ and not just its elements, we shall impose the constraint that

$$\stackrel{~}{g_1g_2}=\stackrel{~}{g}_1\stackrel{~}{g}_2$$ (2)

for all $`g_1,g_2\in \mathrm{\Gamma }`$.

The constraint given in equation (2) is quite important; one is not always guaranteed of finding lifts that satisfy (2). As an example<sup>7</sup><sup>7</sup>7We would like to thank A. Knutson for pointing out this example to us., we shall examine the nontrivial $`𝐙_2`$ bundle over $`S^1`$. Consider the group $`\mathrm{\Gamma }=𝐙_2`$ that acts on the base $`S^1`$ as a half-rotation of the circle. On the total space of the nontrivial $`𝐙_2`$ bundle, essentially a $`720^{\circ }`$ object, $`\mathrm{\Gamma }`$ must act by rotation by either $`+180^{\circ }`$ or $`-180^{\circ }`$, in order to cover the action of $`\mathrm{\Gamma }`$ on the base $`S^1`$ (i.e., in order for diagram (1) to commute). Unfortunately, neither such action on the total space of the bundle squares to the identity, and so equation (2) cannot be satisfied in this case.

As the example just given demonstrates, although trivial bundles admit lifts of orbifold group actions, not all nontrivial bundles admit lifts of orbifold group actions. Rather than digress to explain conditions for the existence of lifts, we shall simply assume lifts exist in all examples considered in this paper. (We shall make a similar assumption when discussing equivariant gerbes.)
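The arithmetic behind this example is worth making explicit. A minimal sketch, modeling the total space of the nontrivial $`𝐙_2`$ bundle as a circle of circumference $`4\pi `$ with projection $`\theta \mathrm{mod}2\pi `$ and deck transformation $`\theta \mapsto \theta +2\pi `$:

```python
# Sketch of the Z_2-bundle example: total space theta in [0, 4*pi),
# projecting to the base circle via theta mod 2*pi. The base
# half-rotation g: theta -> theta + pi has exactly two candidate lifts,
# theta -> theta + pi and theta -> theta + 3*pi.
import numpy as np

period = 4*np.pi
theta = 1.234                    # arbitrary point of the total space

for shift in (np.pi, 3*np.pi):   # the two lifts covering g
    squared = (theta + 2*shift) % period   # applying the lift twice
    print(np.isclose(squared, theta),
          np.isclose(squared, (theta + 2*np.pi) % period))
    # -> False True for both lifts: each square is the deck
    #    transformation, not the identity, so constraint (2) fails
```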
A lift of the action of $`\mathrm{\Gamma }`$ to a bundle is also called a choice of ($`\mathrm{\Gamma }`$-)equivariant structure on the bundle. In passing, we should also mention that instead of speaking of lifts, we could equivalently work in terms of pullbacks. Loosely speaking, in terms of pullbacks, a bundle $`P`$ is “almost equivariantizable” with respect to the action of $`\mathrm{\Gamma }`$ if, for all $`g\in \mathrm{\Gamma }`$, $`g^{*}P\cong P`$. As above, not all bundles will necessarily be equivariant with respect to some given $`\mathrm{\Gamma }`$, but we shall not discuss relevant constraints in this paper. More precisely, an equivariant bundle $`P`$ is defined to be a bundle with a choice of equivariant structure, which can be defined as a specific set of isomorphisms $`\psi _g:g^{*}P\stackrel{\sim }{\to }P`$ for all $`g\in \mathrm{\Gamma }`$, subject to the obvious analogue of equation (2). It is easy to check that the definitions of equivariant structures in terms of lifts and in terms of pullbacks are equivalent.

For completeness, we shall outline the argument here. Let $`\psi _g:g^{*}P\stackrel{\sim }{\to }P`$ define an equivariant structure (in terms of pullbacks) on a principal bundle $`P`$. Then, we can define a lift $`\stackrel{~}{g}`$ of $`g\in \mathrm{\Gamma }`$ as $`\stackrel{~}{g}=\psi _{g^{-1}}\circ (g^{-1})^{*}`$. The reader can easily check that $`\stackrel{~}{g}`$ does indeed define a lift of $`g`$, as defined above. Conversely, given a lift $`\stackrel{~}{g}`$ of $`g\in \mathrm{\Gamma }`$, we can define an equivariant structure (in terms of pullbacks) $`\psi _g:g^{*}P\stackrel{\sim }{\to }P`$ by $`\psi _g^{-1}=g^{*}\circ \stackrel{~}{g}`$. It is easy to check that the two constructions given here are inverses of one another.

How many possible lifts of the action of a given $`g\in \mathrm{\Gamma }`$ exist? Given any one lift, we can certainly make any other lift by composing the action of the lift with a gauge transformation. More precisely, given a set of lifts $`\{\stackrel{~}{g}\,|\,g\in \mathrm{\Gamma }\}`$, we can construct a new set of lifts $`\{\stackrel{~}{g}^{\prime }\}`$ by composing each $`\stackrel{~}{g}`$ with a gauge transformation $`\varphi _g:X\to G`$ such that $`\varphi _{g_2}(x)\varphi _{g_1}(g_2^{-1}x)=\varphi _{g_2g_1}(x)`$ for all $`x\in X`$. Moreover, any two lifts differ by a set of such gauge transformations. We can rephrase this by saying that any two lifts of the action of $`\mathrm{\Gamma }`$ to $`P`$ differ by an element of $`\text{Hom}(\mathrm{\Gamma },C^{\mathrm{\infty }}(G))=H^1(\mathrm{\Gamma },C^{\mathrm{\infty }}(G))`$.

Now, from our knowledge of orbifold Wilson lines, we will eventually want (equivalence classes of) lifts to be classified by $`H^1(\mathrm{\Gamma },G)=\text{Hom}(\mathrm{\Gamma },G)`$, but above we only have $`H^1(\mathrm{\Gamma },C^{\mathrm{\infty }}(G))`$. What have we forgotten? So far we have only studied how to extend the action of $`\mathrm{\Gamma }`$ to the total space of a bundle. We have not yet spoken about further requiring the action of $`\mathrm{\Gamma }`$ to preserve the connection on the bundle. This requirement will place additional constraints on the lifts. When we are done, we will see that by considering lifts of the action of $`\mathrm{\Gamma }`$ to line bundles with connection, instead of just line bundles, we will recover the classification $`H^1(\mathrm{\Gamma },G)`$, as desired.

For more information on equivariant bundles, see for example .

### 2.3 Equivariant bundles with connection

In the previous subsection we described the action of the group $`\mathrm{\Gamma }`$ on principal $`G`$-bundles, for $`G`$ an abelian Lie group. In this section we shall extend this discussion to include consideration of a connection on a bundle. We shall argue that equivalence classes of lifts of the action of $`\mathrm{\Gamma }`$ to pairs (line bundle, connection) are classified by the group cohomology group $`H^1(\mathrm{\Gamma },G)`$. (More precisely, we shall find a non-canonical correspondence between equivariant structures on principal $`G`$-bundles with connection and elements of $`H^1(\mathrm{\Gamma },G)`$. In special cases, such as trivial principal $`G`$-bundles, there will be a canonical correspondence.)

For more information on connection-preserving lifts, see<sup>8</sup><sup>8</sup>8There is also related information in [20, sections 2.4, 2.5] and [21, section V.2]. These references analyze a distinct but related problem; their discussion might at first confuse the reader.
Specifically, instead of considering representations of $`\mathrm{\Gamma }`$ in the group of all connection-preserving lifts of diffeomorphisms of the base, they study the space of connection-preserving lifts itself, and argue that it is a central extension of the group of bundle-with-connection-preserving diffeomorphisms of the base by $`U(1)`$, for principal $`U(1)`$-bundles. The reader might then be tempted to argue that lifts of $`\mathrm{\Gamma }`$ should be classified by $`H^2(\mathrm{\Gamma },U(1))`$, but this is not quite correct. In particular, when viewing the set of all connection-preserving lifts as a central extension, the elements that project to $`\mathrm{\Gamma }`$ will not, in general, form a representation of $`\mathrm{\Gamma }`$, i.e., will not satisfy equation (2). and [24, section 1.13].

Before going on, we shall take a moment to very briefly review connections on principal bundles and what it means for a lift to preserve a connection. One way to think of a connection on a bundle is as a set of gauge fields $`A_\mu `$, one for each element of a good cover of the base. However, there is a slightly more elegant description which we shall use instead [23, section Vbis.A]. If $`P`$ is the total space of a principal $`G`$-bundle on $`X`$, then a connection on $`P`$ is a map $`TP\to \text{Lie }G`$, or a $`(\text{Lie }G)`$-valued 1-form on $`P`$, satisfying certain properties we shall not describe here (see instead, for example, [22, section 11.4] or [23, section V bis A1]). Given an open set $`U\subset X`$ such that $`P|U`$ is trivial, let $`s:U\to P`$ be a section, and let $`\alpha `$ denote the connection on $`P`$, i.e., the corresponding $`(\text{Lie }G)`$-valued 1-form; then we can recover a gauge field on the base $`X`$ as $`s^{*}\alpha `$. Any two distinct sections $`s_1,s_2:U\to P`$ define gauge fields differing by a gauge transformation, i.e., $`s_1^{*}\alpha =s_2^{*}\alpha -(dg)g^{-1}`$. If $`\varphi :X\to G`$ defines a gauge transformation, then it acts on the connection $`\alpha `$ as ([23, section V bis, problem 1], [24, section 1.10])

$$\alpha (p)\mapsto \varphi (\pi (p))\,\alpha (p)\,\varphi ^{-1}(\pi (p))-d\mathrm{ln}(\varphi \circ \pi )(p)$$

for $`p\in P`$ and $`\pi :P\to X`$ the projection. Clearly, a gauge transformation $`\varphi :X\to G`$ will preserve the connection (not just up to gauge equivalence) if and only if $`\varphi `$ is a constant map.

How does a morphism of principal bundles act on a connection? Let $`\tau :P_1\to P_2`$ denote a morphism of principal bundles; then if $`\alpha _2`$ is a connection on $`P_2`$, $`\tau ^{*}\alpha _2`$ (defined in the obvious way) is a connection on $`P_1`$. More relevantly to the problem under discussion, if $`g\in \mathrm{\Gamma }`$ and $`\stackrel{~}{g}`$ denotes the lift of $`g`$, then we shall say that $`\stackrel{~}{g}`$ preserves the connection $`\alpha `$ if $`\stackrel{~}{g}^{*}\alpha =\alpha `$, not just up to gauge transformation.

In order to get a well-defined connection living over the quotient $`X/\mathrm{\Gamma }`$, we shall demand that the lift of the action of $`\mathrm{\Gamma }`$ preserves the connection itself, not just its gauge-equivalence class. (If this were not the case, then we would not be able to immediately write down a well-defined connection over the quotient space.)
Phrased another way, a lift of the action of $`\mathrm{\Gamma }`$ on $`X`$ will yield a well-defined connection on the quotient precisely if it can be deformed by an element of $`H^1(\mathrm{\Gamma },C^{\mathrm{\infty }}(G))`$ so that it preserves the connection itself, not just its gauge-equivalence class. Phrased another way still, if we merely demanded that the lift of $`\mathrm{\Gamma }`$ preserve only the gauge-equivalence class of the connection, and then naively spoke of the gauge-equivalence class descending to the quotient, we would not be guaranteed of finding any representatives of the quotiented gauge-equivalence class.

Necessary and sufficient conditions for a lift of $`\mathrm{\Gamma }`$ to preserve the connection are known and easy to describe . In fact, the action of $`g\in \mathrm{\Gamma }`$ on the base $`X`$ is liftable to a map of bundle with connection if and only if the action of $`g`$ preserves the values<sup>9</sup><sup>9</sup>9Strictly speaking, preserves the values of the Wilson loops up to conjugation; however, for bundles with abelian structure group, conjugation acts trivially. of Wilson loops on the base [24, prop. 1.12.2]. (Note that even for bundles with a non-flat connection – nontrivial bundles – one can still define Wilson loops – however, only in the special case of a flat connection will the value of a Wilson loop depend only on the homotopy class of the loop.) The reader should not be surprised by this result, as this fact is often implicitly claimed in the literature on toroidal heterotic orbifolds, for example.

Now, how many lifts of $`\mathrm{\Gamma }`$ preserve the connection itself? Let $`\{\stackrel{~}{g}\}`$ denote a lift of $`\{g\in \mathrm{\Gamma }\}`$ that preserves the connection itself. We can compose $`\{\stackrel{~}{g}\}`$ with an element of $`H^1(\mathrm{\Gamma },C^{\mathrm{\infty }}(G))`$ to get another lift, but only the constant elements, namely those in $`H^1(\mathrm{\Gamma },G)\subset H^1(\mathrm{\Gamma },C^{\mathrm{\infty }}(G))`$, will act so as to preserve the connection itself. Thus, $`H^1(\mathrm{\Gamma },G)`$ acts on the set of connection-preserving lifts of $`\mathrm{\Gamma }`$, and it should be clear this action is both transitive and free. Note that in the very special case that the equivariant bundle on $`X`$ is trivial, then there is a (distinguished) trivial lift, and so there is a canonical correspondence between elements of $`H^1(\mathrm{\Gamma },G)`$ and connection-preserving lifts. For more general bundles, there is no such distinguished lift.

As this result is important, we shall repeat it. We have just shown that ($`\mathrm{\Gamma }`$-)equivariant structures on $`G`$-bundles with connection can be (noncanonically) identified with elements of $`H^1(\mathrm{\Gamma },G)`$. In special cases, such as trivial bundles, there is a canonical identification.

In passing, note that our analysis of equivariant structures on bundles with connection did not assume $`\mathrm{\Gamma }`$ was freely-acting or that $`\mathrm{\Gamma }`$ be abelian: our analysis applies equally well to cases in which $`\mathrm{\Gamma }`$ has fixed points on the base space, as well as cases in which $`\mathrm{\Gamma }`$ is nonabelian.

### 2.4 Example: heterotic orbifolds

As a more explicit example, let us consider how to define a toroidal heterotic orbifold.
Here we have some principal $`G`$ bundle (for some $`G`$) over the torus, which for simplicity we shall assume to be trivial<sup>10</sup><sup>10</sup>10Although the bundle has a flat connection, it need not be topologically trivial or even trivializable – this is a stronger constraint than necessary, which we are introducing in order to keep this warm-up example simple.. We shall also assume the connection on the bundle on the torus is not merely flat, but actually trivial. In these special circumstances, any diffeomorphism of the base lifts to an action on the bundle.

Now, to define a lift of an action of $`\mathrm{\Gamma }`$ on the torus to the total space of the principal bundle is trivial. Since the total space of a trivial principal bundle is just $`X\times G`$, clearly we can lift the action of $`\mathrm{\Gamma }`$ to the total space by defining it to be trivial on the fiber $`G`$. (More generally, for a nontrivial bundle, demanding that the group $`\mathrm{\Gamma }`$ lift to an action on the total space of the bundle is not trivial. Depending upon both $`X`$ and the bundle in question, there are often obstructions.) Given any one such lift, we can find all other possible lifts simply by composing the trivial lift with a gauge transformation.

In order to get a well-defined connection on the quotient space, however, there are some constraints on the possible lifts. First, note that in these special circumstances, we can describe any lift as the composition of the trivial lift with a gauge transformation. For any $`g\in \mathrm{\Gamma }`$, let $`\varphi _g:X\to G`$ denote the corresponding gauge transformation. Then in order to preserve the connection itself, $`\varphi _g`$ must be constant, in other words, $`\varphi _g=ϵ(g)`$ for some $`ϵ:\mathrm{\Gamma }\to G`$. These $`ϵ(g)`$ define the usual orbifold Wilson lines.
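The constraint just derived has a simple concrete form: constant gauge transformations $`\varphi _g=ϵ(g)`$ compose correctly precisely when $`ϵ`$ is a homomorphism, so the possible lifts form $`\text{Hom}(\mathrm{\Gamma },G)`$. A minimal sketch for $`\mathrm{\Gamma }=𝐙_N`$ and $`G=U(1)`$ (the candidate maps below are our own illustrative choices):

```python
# Sketch: constant lifts of Gamma = Z_N acting on a trivial U(1) bundle.
# The composition rule eps(g2) eps(g1) == eps(g1 + g2) singles out
# homomorphisms, i.e. the N orbifold Wilson lines Hom(Z_N, U(1)) = Z_N.
import numpy as np

N = 4
w = np.exp(2j*np.pi/N)

def is_lift(eps):
    return all(np.isclose(eps(g2)*eps(g1), eps((g1 + g2) % N))
               for g1 in range(N) for g2 in range(N))

# the N homomorphisms eps_k(g) = w^(k g) all define consistent lifts
print([is_lift(lambda g, k=k: w**(k*g)) for k in range(N)])  # all True

# a non-homomorphism, e.g. eps(g) = w^(g^2), does not
print(is_lift(lambda g: w**((g % N)**2)))                    # False
```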
### 2.5 Discussion in terms of Čech cohomology

Eventually in this paper we will work through arguments closely analogous to those above to derive analogues of orbifold Wilson lines for gerbes. To do this properly is somewhat difficult and time-consuming – (1-)gerbes are properly described in terms of sheaves of categories, and their full analysis can be somewhat lengthy. In order to give some general intuition for the results at the level of Hitchin’s discussion of gerbes, we will eventually give a rather loose derivation of the results in terms of Čech cohomology. (A rigorous derivation will appear in .) As a warm-up for that eventual discussion, in this section we shall very briefly describe how one can re-derive orbifold Wilson lines while working at the level of Čech cohomology, i.e., at the level of transition functions for bundles.

We feel that such an approach is philosophically somewhat flawed – the transition functions of a bundle do not really capture all information about the bundle. For example, gauge transformations of a bundle are completely invisible at the level of transition functions. Thus, we do not find the notion of defining an equivariant structure on a bundle by putting an equivariant structure on its transition functions to be completely satisfying. Thus, when we study equivariant gerbes, we shall not limit ourselves to only discussing equivariant structures on gerbe transition functions, but shall also discuss equivariant structures on the gerbes themselves.

Experts will note that in this subsection we implicitly make some assumptions regarding the behavior of bundle trivializations under the action of the orbifold group. As our purpose in this subsection is not to give a rigorous derivation but merely to perform an enlightening calculation, we shall gloss over such issues.

Let $`P`$ be a principal $`G`$-bundle on a manifold $`X`$, and let $`\mathrm{\Gamma }`$ be a group acting on $`X`$ by diffeomorphisms. Let $`\{U_\alpha \}`$ be a “good invariant” cover of $`X`$, by which we mean that each $`U_\alpha `$ is invariant under $`\mathrm{\Gamma }`$, and each $`U_\alpha `$ is a disjoint union of contractible open sets. For example, one can often get good invariant covers of a space $`X`$ from good covers of the quotient $`X/\mathrm{\Gamma }`$. Note that a good invariant cover is not a good cover, in general. We shall assume good invariant covers exist in this subsection, though it is not clear that this need be true in general. (Again, our purpose in this subsection is to present enlightening calculations and plausibility arguments, not completely rigorous proofs.)

Let $`h_{\alpha \beta }`$ denote transition functions for the bundle $`P`$. Assume $`P`$ has an equivariant structure, which at the level of transition functions means that for all $`g\in \mathrm{\Gamma }`$ there exist functions $`\nu _\alpha ^g:U_\alpha \to G`$ such that

$$g^{*}h_{\alpha \beta }\,(=h_{\alpha \beta }\circ g)=\nu _\alpha ^g\,h_{\alpha \beta }\,(\nu _\beta ^g)^{-1}$$ (3)

The functions $`\nu _\alpha ^g`$ are local trivialization realizations of an isomorphism of principal $`G`$-bundles $`\nu ^g:P\stackrel{\sim }{\to }g^{*}P`$ for each $`g\in \mathrm{\Gamma }`$. It should be clear that $`\nu ^g=(\psi _g)^{-1}`$ where the $`\psi _g`$ were defined in the section on equivariant bundles. The $`\nu ^g`$ partially specify an equivariant structure on $`P`$, but we also need a little more information. In particular, we must also demand that for $`g_1,g_2\in \mathrm{\Gamma }`$,

$$\nu _\alpha ^{g_2}\,g_2^{*}\nu _\alpha ^{g_1}=\nu _\alpha ^{g_1g_2}$$ (4)

Note that this is the appropriate Čech version of equation (2).

Now, suppose $`\nu _\alpha ^g`$ and $`\overline{\nu }_\alpha ^g`$ define two distinct equivariant structures on $`P`$, with respect to the same group $`\mathrm{\Gamma }`$. Define $`\varphi _\alpha ^g:U_\alpha \to U(1)`$ by

$$\varphi _\alpha ^g\equiv \frac{\nu _\alpha ^g}{\overline{\nu }_\alpha ^g}$$

From the fact that both $`\nu _\alpha ^g`$ and $`\overline{\nu }_\alpha ^g`$ must satisfy equation (3), we can immediately derive the fact that

$$\varphi _\alpha ^g=\varphi _\beta ^g$$

on $`U_{\alpha \beta }=U_\alpha \cap U_\beta `$, and so the $`\varphi _\alpha ^g`$ define a function $`\varphi ^g:X\to U(1)`$. This is a gauge transformation describing the difference between two equivariant structures. It is almost, but not quite, the same as the gauge transformation $`\varphi _g`$ described in the section on equivariant bundles. From equation (4), we see that the $`\varphi ^g`$ must obey

$$\varphi ^{g_2}\,g_2^{*}\varphi ^{g_1}=\varphi ^{g_1g_2}$$

Thus, any two equivariant structures on $`P`$ differ by an element of $`H^1(\mathrm{\Gamma },C^{\mathrm{\infty }}(G))`$, as described in the section on equivariant bundles.

We shall now recover the fact that equivariant structures on bundles with connection differ by elements of $`H^1(\mathrm{\Gamma },G)`$, for abelian $`G`$. Let $`A^\alpha `$ be a ($`\text{Lie }G`$)-valued one-form on the open set $`U_\alpha `$, defining a connection on $`P`$.
In other words, on overlaps $`U_{\alpha \beta }=U_\alpha \cap U_\beta `$,

$$A^\alpha -A^\beta =d\,\text{ln }h_{\alpha \beta }$$

Under the action of $`g\in \mathrm{\Gamma }`$, since

$$g^{*}h_{\alpha \beta }=\nu _\alpha ^g\,h_{\alpha \beta }\,(\nu _\beta ^g)^{-1}$$

we know that

$$g^{*}A^\alpha =A^\alpha +d\,\text{ln }\nu _\alpha ^g$$

From this we see that if $`\nu _\alpha ^g`$ and $`\overline{\nu }_\alpha ^g`$ define two distinct equivariant structures on the transition functions, then we must have $`\nu _\alpha ^g/\overline{\nu }_\alpha ^g`$ constant, in order for $`g^{*}A^\alpha `$ to be well-defined. Thus, we recover the fact that $`\varphi ^g`$ is constant, and so the subset of $`H^1(\mathrm{\Gamma },C^{\mathrm{\infty }}(G))`$ that describes equivariant bundles with connection is given by $`H^1(\mathrm{\Gamma },G)`$, as claimed.

In essence, we have been using a form of equivariant Čech cohomology. The mathematics literature seems to contain multiple<sup>11</sup><sup>11</sup>11One version of equivariant Čech cohomology is described in [25, chapitre V]. Another version is described in [26, section 2] and [27, section 5]. versions of equivariant Čech cohomology; unfortunately none of them are quite adequate for our eventual needs (i.e., none of them correspond to the precise way we set up equivariant structures on gerbes), and so we shall not speak about them further.

## 3 $`n`$-Gerbes

Discrete torsion has long been associated with the two-form $`B`$-fields of supergravity theories. The reader should not be surprised, therefore, that a deep understanding of discrete torsion hinges on a deep understanding of the two-form $`B`$ fields. We shall argue that $`B`$ fields should be understood as connections on 1-gerbes, and that discrete torsion arises when lifting the action of an orbifold group to a 1-gerbe with connection.

Why might one want a new mathematical object to describe $`B`$-fields in type II string theory? The reason is a dilemma that has no doubt puzzled many readers for many years. The torsion<sup>12</sup><sup>12</sup>12No relation to the various mathematical concepts of torsion. $`H`$ is defined to be $`H=dB`$. So long as $`H`$ is taken to be cohomologically trivial, this is certainly a sensible definition. Unfortunately, in general one often wants to speak of $`H`$ which is not a cohomologically trivial element of $`H^3(𝐑)`$. In such a case, the relation $`H=dB`$ can only hold locally. (This point has been made previously in, for example, .) We shall see shortly that such $`H`$ can be understood globally as a connection on a $`1`$-gerbe. More generally, many of the tensor field potentials appearing in type II string theories naively appear to have a natural and obvious interpretation in terms of connections on $`n`$-gerbes, though for the sake of simplifying the discussion we will usually only discuss the two-form tensor field in examples.

In passing, we should also mention that although some tensor field potentials may be understood in terms of gerbes, it is not clear that all tensor field potentials have such an understanding. One notable exception is the $`B`$-field of heterotic string theory. Recall that one has the anomaly cancellation constraint

$$dH\propto \text{Tr }R\wedge R-\text{Tr }F\wedge F$$

If the heterotic $`B`$ field were a 1-gerbe connection, then we shall see that the curvature $`H`$ would be a closed form, whereas that is certainly not the case here in general.
Moreover, many other tensor field potentials have nontrivial interactions and “mixed” gauge transformations, and it is not at all clear whether these phenomena can be understood in terms of gerbes. As a result, one should be somewhat careful about blindly identifying all tensor field potentials with connections on gerbes. These issues should not arise for the comparatively simpler cases of type II 2-form potentials (with other background fields turned off), which is the primary case of interest for us in this paper.

We should also mention a slight technical caveat. We shall only be discussing gerbes with “band” $`C^{\mathrm{\infty }}(U(1))`$ , which means, in less technical language, that there exist more general gerbes than those discussed in this section. For example, some theories contain multiple coupled tensor multiplets (for one example, see ), which would be described in terms of connections on gerbes with “band” $`C^{\mathrm{\infty }}(U(1)^N)`$. We shall not discuss gerbes with general bands in this paper; see instead .

In this section we will give a description of gerbes and connections on gerbes, due to . We shall begin by defining gerbes themselves, then afterwards we shall describe connections on gerbes. In the next section we will discuss the analogue of “orbifold Wilson lines” for gerbes.

### 3.1 Description in terms of cohomology

We shall begin by describing characteristic classes of abstract objects called “$`n`$-gerbes,” following the discussion in . These characteristic classes, for $`n`$-gerbes on a space $`X`$, are elements of the sheaf cohomology group $`H^{n+1}(X,C^{\mathrm{\infty }}(U(1)))`$. This is closely analogous to describing a line bundle in terms of Chern classes. More intrinsic definitions of gerbes are given in the next section and in . Gerbes themselves take considerably longer to define; by first describing their characteristic classes, we hope to give the reader some basic intuitions for these objects.

In passing we should comment on our usage of the terminology “$`n`$-gerbe.” We are following the simplified conventions of . In general, an $`n`$-gerbe should, morally, be understood in terms of sheaves of multicategories. Unfortunately, $`n`$-categories for $`n>2`$ are not well understood at present. As a result, although $`1`$-gerbes and, to a slightly lesser extent, $`2`$-gerbes are well understood, higher degree gerbes are not on as firm a footing. It seems reasonably clear that such objects should exist, however, and one can certainly describe many properties that a general $`n`$-gerbe should possess in terms of characteristic classes (as in this section) and Deligne cohomology. Thus, we shall often speak (loosely) of general $`n`$-gerbes, though for $`n>2`$ the reader should probably take such remarks with a small grain of salt.

A couple of paragraphs ago we mentioned that the characteristic classes of gerbes could be understood in terms of sheaf cohomology, and more specifically that the characteristic classes of possible $`n`$-gerbes on a space $`X`$ live in $`H^{n+1}(X,C^{\mathrm{\infty }}(U(1)))`$. For those readers not acquainted with sheaf cohomology, we can express this somewhat more simply (and loosely) in terms of Čech cocycles with respect to some fixed open cover. Let $`\{U_\alpha \}`$ be a “reasonably nice<sup>13</sup><sup>13</sup>13Specifically, a good open cover.” open cover of $`X`$.
Then an element of $`H^{n+1}(X,C^{\mathrm{\infty }}(U(1)))`$ is essentially defined by a set of smooth functions $`h_{\alpha _0\mathrm{\cdots }\alpha _{n+1}}:U_{\alpha _0\mathrm{\cdots }\alpha _{n+1}}\to U(1)`$, one for each overlap $`U_{\alpha _0\mathrm{\cdots }\alpha _{n+1}}=U_{\alpha _0}\cap \mathrm{\cdots }\cap U_{\alpha _{n+1}}`$, subject to the constraint

$$(\delta h)_{\alpha _0\mathrm{\cdots }\alpha _{n+2}}=1$$ (5)

where $`\delta h`$ is defined by

$$(\delta h)_{\alpha _0\mathrm{\cdots }\alpha _{n+2}}\equiv \underset{i=0}{\overset{n+2}{\prod }}h_{\alpha _0\mathrm{\cdots }\widehat{\alpha _i}\mathrm{\cdots }\alpha _{n+2}}^{(-1)^i}$$

on the intersection $`U_{\alpha _0}\cap \mathrm{\cdots }\cap U_{\alpha _{n+2}}`$. Two such sets of functions $`h_{\alpha _0\mathrm{\cdots }\alpha _{n+1}}`$, $`h_{\alpha _0\mathrm{\cdots }\alpha _{n+1}}^{\prime }`$ are identified with the same element of $`H^{n+1}(X,C^{\mathrm{\infty }}(U(1)))`$ if and only if

$$h_{\alpha _0\mathrm{\cdots }\alpha _{n+1}}=h_{\alpha _0\mathrm{\cdots }\alpha _{n+1}}^{\prime }\underset{i=0}{\overset{n+1}{\prod }}f_{\alpha _0\mathrm{\cdots }\widehat{\alpha _i}\mathrm{\cdots }\alpha _{n+1}}^{(-1)^i}$$ (6)

for some functions $`f_{\alpha _0\mathrm{\cdots }\alpha _n}:U_{\alpha _0}\cap \mathrm{\cdots }\cap U_{\alpha _n}\to U(1)`$.

As a special case, let us see how this duplicates line bundles. In the classification of $`n`$-gerbes implicit in the description of characteristic classes above, it should be clear that line bundles are very special examples of $`n`$-gerbes, specifically, $`0`$-gerbes. A $`0`$-gerbe is specified by an element of $`H^1(X,C^{\mathrm{\infty }}(U(1)))`$, that is, a set of smooth functions $`h_{\alpha \beta }:U_\alpha \cap U_\beta \to U(1)`$, such that

$$h_{\beta \gamma }\,h_{\alpha \gamma }^{-1}\,h_{\alpha \beta }=1$$

on triple intersections $`U_\alpha \cap U_\beta \cap U_\gamma `$ (this is the specialization of equation (5)). The reader should immediately recognize this as defining transition functions for a smooth $`U(1)`$ bundle on $`X`$. Equation (5) is precisely the statement that transition functions agree on triple overlaps. Moreover, two $`U(1)`$ line bundles are equivalent if and only if their transition functions $`h_{\alpha \beta }`$, $`h_{\alpha \beta }^{\prime }`$ are related by [30, chapter 5.2]

$$h_{\alpha \beta }=h_{\alpha \beta }^{\prime }\,f_\alpha /f_\beta $$

for some set of functions $`f_\alpha :U_\alpha \to U(1)`$. The reader should immediately recognize this as the specialization of equation (6).

Although the sheaf cohomology group $`H^1(X,C^{\mathrm{\infty }}(U(1)))`$ precisely describes (equivalence classes of) transition functions for $`0`$-gerbes (smooth principal $`U(1)`$ bundles), the same is not true for higher degree gerbes – an element of sheaf cohomology for a higher degree gerbe does not define a set of transition functions. (We shall study transition functions for gerbes in the next subsection.)

We can rewrite these characteristic classes of $`n`$-gerbes in a format that is more accessible to calculation. Using the short exact sequence of ($`C^{\mathrm{\infty }}`$) sheaves

$$0\to C^{\mathrm{\infty }}(𝐙)\cong 𝐙\to C^{\mathrm{\infty }}(𝐑)\to C^{\mathrm{\infty }}(U(1))\to 0$$

one can immediately prove, from the associated long exact sequence, that for $`n\ge 0`$,

$$H^{n+1}(X,C^{\mathrm{\infty }}(U(1)))\cong H^{n+2}(X,𝐙)$$

As a special case, this implies that $`0`$-gerbes are classified by elements of $`H^2(X,𝐙)`$, and indeed it is a standard fact that $`C^{\mathrm{\infty }}`$ line bundles are classified by their first Chern class.
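The Čech manipulations in (5) and (6) can be exercised directly. A small sketch on an abstract four-element good cover (constants stand in for the $`U(1)`$-valued functions, which suffices for the combinatorics): the coboundary of any 1-cochain automatically satisfies the 2-cocycle condition (5), which is exactly the statement that the corresponding 1-gerbe is trivializable.

```python
# Sketch: the alternating-product Cech differential of (5)-(6) on an
# abstract four-set cover; constants model U(1)-valued functions.
import numpy as np
from itertools import combinations

opens = range(4)
rng = np.random.default_rng(0)

def delta(c):
    k = len(next(iter(c)))             # length of the tuples indexing c
    out = {}
    for s in combinations(opens, k + 1):
        val = 1 + 0j
        for i in range(k + 1):
            val *= c[s[:i] + s[i+1:]]**((-1)**i)   # as in (5)
        out[s] = val
    return out

# random U(1)-valued 1-cochain f_{ab}, one value per double overlap
f = {s: np.exp(2j*np.pi*rng.random()) for s in combinations(opens, 2)}

h = delta(f)        # a 2-cocycle of the form (6) with h' = 1
dh = delta(h)       # its coboundary on the quadruple overlap
print(all(np.isclose(v, 1) for v in dh.values()))   # True: (5) holds
```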
That any two trivializations differ by an $`(n-1)`$-gerbe should be clear from the description above – any cohomologically trivial $`(n+1)`$-cocycle is a coboundary of some $`n`$-cochain, and any two such cochains differ by an $`n`$-cocycle, defining an $`(n-1)`$-gerbe.

Before going on, we should mention that in the remainder of this paper (as well as ) we shall usually abbreviate “1-gerbes” to simply “gerbes.” Unfortunately, on rare occasion we shall also use “gerbes” as shorthand for $`n`$-gerbes. The usage should be clear from context.

### 3.2 Description in terms of transition functions

In the previous section we described $`n`$-gerbes in terms of sheaf cohomology, which is precisely analogous to describing line bundles in terms of Chern classes. Traditionally, gerbes are defined in terms of sheaves of multicategories, as we shall do in . In this section, we shall give a simplified account, due to , which amounts to describing gerbes in terms of transition functions. In we shall review sheaves of categories and the description of 1-gerbes in such language, in addition to giving a geometric first-principles derivation of discrete torsion.

Before grappling with transition functions for $`n`$-gerbes, we shall begin with a description of transition functions for 1-gerbes. Let $`\{U_\alpha \}`$ be a good cover of $`X`$; then we can define a 1-gerbe on $`X`$ in terms of two pieces of data:

1. A principal $`U(1)`$ bundle $`ℒ_{\alpha \beta }`$ over each $`U_{\alpha \beta }=U_\alpha \cap U_\beta `$ (subject to the convention $`ℒ_{\beta \alpha }=ℒ_{\alpha \beta }^{-1}`$), such that $`ℒ_{\alpha \beta }\otimes ℒ_{\beta \gamma }\otimes ℒ_{\gamma \alpha }`$ is trivializable on $`U_{\alpha \beta \gamma }`$
2. An explicit trivialization $`\theta _{\alpha \beta \gamma }:U_{\alpha \beta \gamma }\rightarrow U(1)`$ of $`ℒ_{\alpha \beta }\otimes ℒ_{\beta \gamma }\otimes ℒ_{\gamma \alpha }`$ on $`U_{\alpha \beta \gamma }`$

Then, $`\theta `$ naturally defines a Čech 2-cochain, and it should be clear that $`\delta \theta =1`$, i.e., the canonical section of the canonically trivial bundle defined by tensoring the obvious twelve factors of principal $`U(1)`$ bundles. Thus, $`\theta `$ defines a 2-cocycle, and it should be clear that this 2-cocycle is the same 2-cocycle defining the 1-gerbe in the description in the previous subsection.

We should take a moment to clarify the precise relationship between the construction above and 1-gerbes defined in terms of sheaves of categories. In the description of gerbes in terms of sheaves of categories, one can define transition functions for the gerbe with respect to a local trivialization, in precise analogy to transition functions for bundles. However, for 1-gerbes the objects one associates to overlaps of open sets are not maps into the group, but rather line bundles, subject to a constraint on triple overlaps. Put more directly, the description given in the paragraphs above precisely describes transition functions for a 1-gerbe. The corresponding element of sheaf cohomology is merely a characteristic class, classifying isomorphism classes. Thus, the description of 1-gerbes given so far in this section is technically a description of transition functions for 1-gerbes.

The reader may well wonder what is a 1-gerbe; the answer is, a special kind of sheaf of categories. Sheaves of categories and related concepts have been banished to , but we shall give a very quick flavor of the construction here. Sometimes one speaks of “objects” of the 1-gerbe. These are line bundles $`ℒ_\alpha `$ over open sets $`U_\alpha `$, such that $`ℒ_{\alpha \beta }=ℒ_\alpha \otimes ℒ_\beta ^{-1}`$.
Objects exist locally on $`X`$, but in general will not exist globally (unless the 1-gerbe is trivializable, meaning the associated Čech 2-cocycle is trivial in cohomology). In more formal treatments of gerbes, one often associates sheaves of categories with gerbes. (More precisely, there is a standard method to associate sheaves of 1-categories and 2-categories to 1-gerbes and 2-gerbes, respectively. The higher-degree gerbes outlined in presumably correspond to sheaves of higher-degree multicategories; however, the precise definitions required have not been worked out, to our knowledge.) In this description, the “objects” mentioned above are precisely objects of a category associated to some open set on $`X`$. We shall not go into a detailed description of gerbes as sheaves of categories in this section; see instead .

Now that we have discussed 1-gerbes, how are $`n`$-gerbes defined? In principle an analogous description should hold true – transition functions for an $`n`$-gerbe should consist of associating an $`(n-1)`$-gerbe to each overlap, subject to constraints at triple overlaps. Although we are well-acquainted with more intrinsic definitions of 1-gerbes , we have not worked through higher $`n`$-gerbes in comparable detail, and so we hesitate to say much more concerning transition functions for higher order gerbes. We hope to return to this elsewhere .

### 3.3 Connections on gerbes

Now that we have stated the definition of an $`n`$-gerbe, we shall define a connection on an $`n`$-gerbe, which is a straightforward generalization of the notion of connection on a $`C^{\mathrm{\infty }}`$ line bundle. For simplicity, fix some good open cover $`U_\alpha `$ of $`X`$. A connection on an $`n`$-gerbe is defined by a choice of $`H\in \mathrm{\Omega }^{n+2}(X)`$ such that $`dH=0`$ (a closed $`(n+2)`$-form on $`X`$), and a choice of $`(B_\alpha )\in C^1(\mathrm{\Omega }^{n+1})`$, that is, a choice of smooth $`(n+1)`$-form on $`U_\alpha `$ for each $`\alpha `$, such that on each $`U_\alpha `$, $`H|_{U_\alpha }=dB_\alpha `$, and such that on overlaps $`U_\alpha \cap U_\beta `$, $`B_\alpha -B_\beta =dA_{\alpha \beta }`$, where $`A_{\alpha \beta }`$ is a smooth $`n`$-form on $`U_{\alpha \beta }`$. In general there will be more terms, of lower-degree forms, filling out an entire cocycle in the Čech–de Rham complex.

To be complete, we need to specify how the forms on various open sets are related by the transition functions for the $`n`$-gerbe. For simplicity, consider a 1-gerbe. Here, we have a globally-defined 3-form $`H`$, a family of 2-forms $`B_\alpha `$, one for each open set $`U_\alpha `$, and a family of 1-forms $`A_{\alpha \beta }`$, one for each intersection $`U_{\alpha \beta }=U_\alpha \cap U_\beta `$. Recall transition functions for a 1-gerbe consist of line bundles $`ℒ_{\alpha \beta }`$ associated to each $`U_{\alpha \beta }`$; the 1-forms $`A_{\alpha \beta }`$ are precisely connections on the $`U(1)`$ bundles $`ℒ_{\alpha \beta }`$. (Note we are implicitly using the fact that the $`\{U_\alpha \}`$ form a good cover, so each $`U_{\alpha \beta }`$ is contractible.)
If $`\theta _{\alpha \beta \gamma }`$ denotes the specified trivialization of $`ℒ_{\alpha \beta }\otimes ℒ_{\beta \gamma }\otimes ℒ_{\gamma \alpha }`$ on $`U_{\alpha \beta \gamma }`$, then we have
$$A_{\alpha \beta }+A_{\beta \gamma }+A_{\gamma \alpha }=d\text{ln }\theta _{\alpha \beta \gamma }$$
Then, as per the description above,
$$B_\alpha -B_\beta =dA_{\alpha \beta }$$
on overlaps $`U_{\alpha \beta }`$, and
$$H|_{U_\alpha }=dB_\alpha $$
In principle, similar remarks hold for more general $`n`$-gerbes.

The reader should immediately recognize that a connection on a $`1`$-gerbe is precisely the same thing as a type II string theory $`B`$-field. (The point that $`B`$ fields and the relation $`H=dB`$ should really only be understood locally has been made previously in the physics literature, albeit not usually in terms of connections on gerbes; see for example .) This relationship seems to be well understood in certain parts of the field; we repeat it here simply to make this note more self-contained. In general, the reader should recognize that tensor field potentials appearing in type II supergravities often look like connections on gerbes.

The reader should also notice that a connection on a $`0`$-gerbe precisely coincides with the usual notion of connection on a smooth line bundle. To make this more clear, change notation as follows: change $`H`$ to $`F`$, and change $`B`$ to $`A`$. For a connection on a smooth line bundle, locally we have the relation $`F=dA`$, but globally this does not hold if $`F`$ is a nonzero element of $`H^2(X,𝐑)`$. In the special case that $`F`$ descends from an element of $`H^2(X,𝐙)`$ via the natural map $`H^2(X,𝐙)\rightarrow H^2(X,𝐑)`$, then there exists a $`C^{\mathrm{\infty }}`$ line bundle whose first Chern class is represented by $`F`$. (In particular, Kähler forms can be interpreted as the first Chern classes of (holomorphic) line bundles precisely when the Kähler form lies in the image of $`H^2(X,𝐙)`$ in $`H^2(X,𝐑)`$.) Analogously, for an $`n`$-gerbe, when the curvature $`H`$ descends from an element of $`H^{n+2}(X,𝐙)`$, then there exists an $`n`$-gerbe whose characteristic class is defined by $`H`$.

In fact, we have been slightly sloppy about certain details. Suppose that $`H^{n+2}(X,𝐙)`$ contains torsion (in the mathematical sense), that is, elements of finite order; then if an $`n`$-gerbe-connection defines an $`n`$-gerbe, it does not do so uniquely – one will get several $`n`$-gerbes, each of which has a characteristic class that descends to the same element of $`H^{n+2}(X,𝐑)`$. Are these extra degrees of freedom physically relevant – in other words, must there be an actual gerbe underlying these connections? It is easy to see that an actual gerbe must underlie such connections. The point is that torsion elements of $`H^{n+2}(X,𝐙)`$ contain physically relevant information, as was noted in, for example, .

Given that $`n`$-gerbes can be loosely interpreted as one generalization of line bundles, the reader may wonder if there is some gerbe-analogue of the holonomy of a flat $`U(1)`$ connection. Indeed, it is possible to define the holonomy of a flat $`n`$-gerbe-connection, though we shall not do so here. Such holonomies have been observed in physics previously; one example is .

As mentioned earlier, gerbes are often described in terms of sheaves of categories. There is a corresponding notion of connection in such a description, which we have summarized in and can also be found in \[20, section 5.3\].
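As a sanity check of the $`0`$-gerbe dictionary just described, the following short symbolic computation is our own illustration (not taken from the references); it uses the unit-charge monopole data on $`S^2`$ as an assumed standard example, and verifies that two local potentials $`A_\pm `$ glue through the transition function $`h=e^{i\varphi }`$, define a single global curvature $`F`$, and integrate to first Chern class 1.

```python
import sympy as sp

# Minimal check of the 0-gerbe dictionary (F = dA locally, A_+ - A_- = (1/i) d ln h)
# on S^2, using the standard unit-charge monopole data as a toy example.
theta, phi = sp.symbols('theta phi', real=True)

# local connection 1-forms A_pm = f_pm(theta) dphi on the two polar charts
f_plus = (1 - sp.cos(theta)) / 2      # smooth at theta = 0
f_minus = -(1 + sp.cos(theta)) / 2    # smooth at theta = pi

# F = dA: with A = f(theta) dphi, F = f'(theta) dtheta ^ dphi on each chart
F_plus = sp.diff(f_plus, theta)
F_minus = sp.diff(f_minus, theta)
assert sp.simplify(F_plus - F_minus) == 0   # the two charts define one global F

# transition function h = exp(i phi); (1/i) d ln h = dphi,
# so A_+ - A_- should equal dphi, i.e. the dphi-coefficient difference is 1
assert sp.simplify(f_plus - f_minus - 1) == 0

# first Chern class: (1/2 pi) * integral of F over S^2
c1 = sp.integrate(sp.integrate(F_plus, (theta, 0, sp.pi)),
                  (phi, 0, 2 * sp.pi)) / (2 * sp.pi)
print("c_1 =", sp.simplify(c1))   # -> 1
```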
### 3.4 Gauge transformations of gerbes

For principal $`U(1)`$-bundles there is a well-defined notion of gauge transformation: a gauge transformation is defined by a map $`f:X\rightarrow U(1)`$. What is the analogue for $`n`$-gerbes?

We shall begin by describing gauge transformations for 1-gerbes. It can be shown that the analogue of a gauge transformation for a 1-gerbe is given by a principal $`U(1)`$-bundle, and a gauge transformation of a 1-gerbe with connection is defined by a principal $`U(1)`$-bundle with connection. (Strictly speaking, an equivalence class of principal $`U(1)`$-bundles, but we shall defer such technicalities to later discussions.) A rigorous derivation of this fact and related material is given in . We shall describe the implications of this fact for connections on 1-gerbes, and for transition functions.

Intuitively, how does a principal $`U(1)`$-bundle act on a 1-gerbe? Very roughly, the general idea is that given a bundle with connection $`A`$, under a gauge transformation the $`B`$ field will transform as $`B\rightarrow B+dA`$. (At the same level, we can see that only equivalence classes of bundles with connection are relevant. If $`A`$ and $`A^{\prime }`$ differ by a gauge transformation (of bundles), then $`dA=dA^{\prime }`$, and so they define the same action on $`B`$.) In terms of sheaves of categories, a 1-gerbe is locally a category of all principal $`U(1)`$-bundles, so given any one object in that category, we can tensor with a principal $`U(1)`$-bundle to recover another object. This is essentially the action, in somewhat loose language. To properly describe how a principal $`U(1)`$-bundle acts on a 1-gerbe requires understanding 1-gerbes in terms of sheaves of categories.

The reader might well ask, however, how a bundle acts on the transition functions for a 1-gerbe. We described transition functions for 1-gerbes by associating principal $`U(1)`$ bundles to intersections $`U_{\alpha \beta }`$, together with an explicit trivialization of Čech coboundaries. The reader should (correctly) guess that a gauge transformation of a 1-gerbe, at the level of transition functions, should be a gauge transformation of the bundle on each coordinate overlap, such that the gauge transformations preserve the trivializations on triple intersections. In other words, a gauge transformation of a 1-gerbe should be a set of maps $`g_{\alpha \beta }:U_{\alpha \beta }\rightarrow U(1)`$, subject to the condition that $`\delta g=1`$. Put more simply still, a gauge transformation of a 1-gerbe is precisely a principal $`U(1)`$-bundle. Note that, as one should expect by analogy with bundles, such a gauge transformation leaves the transition functions invariant: a gauge transformation of each bundle is an automorphism of that bundle, so the bundles on coordinate overlaps are unchanged.

How does a gauge transformation of a 1-gerbe act on a connection on the 1-gerbe? Principal bundles have well-defined actions on gerbes; a unique specification of the action of a principal $`U(1)`$-bundle, call it $`P`$, on a gerbe connection is equivalent to a choice of connection on $`P`$. Let $`\{h_{\alpha \beta }:U_{\alpha \beta }\rightarrow U(1)\}`$ be transition functions for $`P`$, and $`\{A^\alpha \}`$ a set of 1-forms on elements of the cover $`\{U_\alpha \}`$ defining a connection on $`P`$.
Let $`(B^\alpha ,A^{\alpha \beta },g_{\alpha \beta \gamma })`$ be a set of data defining a connection on a 1-gerbe. Then under the action of $`P`$, this data transforms as follows:
$`B^\alpha `$ $`\rightarrow `$ $`B^\alpha +dA^\alpha `$
$`A^{\alpha \beta }`$ $`\rightarrow `$ $`A^{\alpha \beta }+d\text{ln}h_{\alpha \beta }`$
$`g_{\alpha \beta \gamma }`$ $`\rightarrow `$ $`g_{\alpha \beta \gamma }\left(\delta h\right)_{\alpha \beta \gamma }=g_{\alpha \beta \gamma }`$
(The last equality holds because the $`h_{\alpha \beta }`$ are transition functions for an honest bundle, so $`\delta h=1`$. Note also that the transformed data still satisfy the gluing relation $`B^\alpha -B^\beta =dA^{\alpha \beta }`$, since the connection 1-forms on $`P`$ themselves glue as $`A^\alpha -A^\beta =d\text{ln}h_{\alpha \beta }`$.) More generally, the reader should correctly guess that a gauge transformation of an $`n`$-gerbe is an $`(n-1)`$-gerbe. We shall not attempt to justify this statement here, however.

### 3.5 Gerbes versus K theory

It has recently been claimed that the Ramond-Ramond tensor field potentials of type II theories can be understood in terms of K theory, so we feel we should take a moment to justify our assumption of a gerbe description of certain fields. In this paper, when we discuss discrete torsion, we have in mind the NS two-form field potential of type II theories, and we implicitly assume that the other tensor field potentials have vanishing vacuum expectation value, to ensure that the curvature of the $`B`$ field is a closed form. In these circumstances, the $`B`$ field is well-described as a connection on a gerbe.

However, in general terms the basic ideas presented in this paper should also hold more generally. At some level, the point of this paper is that in any theory containing fields with gauge invariances, specifying the orbifold group action on the base space does not suffice to define the action of the orbifold group on the theory, because the orbifold group action can always be combined with gauge transformations. To properly understand possible equivariant structures (equivalently, orbifold group actions) on tensor field potentials not described as gerbes involves certain technical distinctions, but the basic point is the same.

### 3.6 Why gerbes?

So far we have presented gerbes as being a natural mathematical structure for which many of the tensor field potentials of supergravities can be understood as connections. A doubting Thomas might argue, are gerbes really necessary? After all, in supergravity theories, we only see the tensor fields themselves; why not only speak of tensor fields on coordinate charts, and forget about more abstract underlying structures?

We shall answer this question with another question: why bundles? Whenever one sees a vector field with the usual gauge invariances, it is commonplace to associate it with some bundle. One can ask, why? In supergravity and gauge theories containing vector fields, one does not see a bundle, only a set of vector fields on coordinate charts. Bundles (formulated as topological spaces) describe auxiliary spaces – fibers – which are fibered over spacetime, but these auxiliary structures are neither seen nor detected in physics. There are no extra dimensions in the theory corresponding to the fiber of a fiber bundle, so why work with bundles at all? Since using bundles means invoking physically meaningless auxiliary structures, why not just ignore bundles and only work with vector fields on coordinate patches?

Part of the reason people speak of bundles and not just vector fields on coordinate patches is that bundles give an insightful, elegant way of thinking about vector fields on coordinate patches. For example, recent discussions of brane charges in terms of K-theory would have been far more obscure if the notion of bundles was not commonly accepted.
Similarly, the notion of a gerbe gives an insightful and elegant structure in which to understand many of the tensor field potentials appearing in supergravity theories. In principle, one could understand tensor fields without thinking about gerbes, in the same way that one can understand vector fields without thinking about bundles. However, just as bundles give an insightful and useful way to think about vector fields, so gerbes give an insightful and useful way to think about tensor fields.

A slightly more subtle question that could be asked is the following. In , we shall describe 1-gerbes in terms of sheaves of categories; however, this description is not unique – gerbes can be described in several different ways. If one should work with gerbes, which description is relevant? A closely analogous problem arises in dealing with bundles. A bundle has multiple descriptions – as a topological space, or a special kind of sheaf, for example. Connections on bundles can be described in terms of vector fields, or, in special circumstances, as holomorphic structures on certain topological spaces. The correct description one should use varies depending upon the application and one’s personal taste. Similarly, which description of gerbes is relevant varies depending upon both the application and personal inclination.

## 4 Discrete torsion

In defining orbifolds, it is well-known that the Riemannian space being orbifolded must have a well-defined action of the orbifold group $`\mathrm{\Gamma }`$. However, after our discussion of gerbes, the reader should suspect that something has been omitted from standard discussions of orbifolds in string theory. Namely, if we understand some of the tensor fields appearing in type II supergravities in terms of connections on gerbes, then we must also specify the precise $`\mathrm{\Gamma }`$-action on the gerbes. This action need not be trivial, and (to our knowledge) has been completely neglected in previous discussions of type II orbifolds.

We shall find that the set of actions of an orbifold group $`\mathrm{\Gamma }`$ on a 1-gerbe is naturally acted upon by a group which includes $`H^2(\mathrm{\Gamma },U(1))`$, just as the set of orbifold group actions on a principal $`U(1)`$-bundle with connection is naturally acted upon by $`H^1(\mathrm{\Gamma },U(1))`$. In special cases, such as trivial gerbes for example, there will be a canonical orbifold group action, and in such cases the set of orbifold group actions can be identified with a group. The possible equivariant structures (meaning, the possible orbifold group actions) correspond to analogues of orbifold Wilson lines for $`B`$-fields, in the same way that equivariant structures on a principal $`U(1)`$-bundle with connection correspond to orbifold Wilson lines.

It is natural to speculate that the action of an orbifold group $`\mathrm{\Gamma }`$ on an $`n`$-gerbe is described by a group including $`H^{n+1}(\mathrm{\Gamma },U(1))`$, in a fashion analogous to the above. This can be checked for 2-gerbes in the same fashion as for 1-gerbes described in this paper, and we are presently studying this issue . For gerbes of higher degree, a precise understanding in terms of sheaves of multicategories is not yet known, and so one can only make somewhat more limited remarks .

### 4.1 Basics of discrete torsion

Discrete torsion was originally discovered as an ambiguity in the choice of phases of different twisted sector contributions to partition functions of orbifold string theories.
The possible inequivalent choices of phases are counted by elements of the group cohomology group $`H^2(\mathrm{\Gamma },U(1))`$, where $`\mathrm{\Gamma }`$ is the orbifold group and the action of $`\mathrm{\Gamma }`$ on the coefficients $`U(1)`$ is taken to be trivial. Since its discovery, discrete torsion has been considered a rather mysterious quantity.

Our description of discrete torsion essentially boils down to the observation that discrete torsion is the analogue of orbifold Wilson lines for 2-form fields rather than vectors. Recall orbifold Wilson lines could be described as a (discrete) ambiguity in lifting the orbifold action on a space to a bundle with connection. A similar ambiguity arises in lifting orbifold actions to gerbes with connection, and this ambiguity is partially measured by $`H^2(\mathrm{\Gamma },U(1))`$ . More precisely, in general the set of lifts of orbifold actions (in more technical language, the set of equivariant structures on a (1-)gerbe with connection) is (noncanonically) isomorphic to a group which includes $`H^2(\mathrm{\Gamma },U(1))`$, viewed as a set rather than a group. In special cases, such as trivial gerbes, there exists a canonical isomorphism.

Just as for bundles, not all nontrivial gerbes admit lifts of orbifold group actions. We shall not attempt to study conditions under which a nontrivial gerbe admits such a lift; rather, we shall simply assume that lifts exist in all examples in this paper.

How does this description of discrete torsion as an analogue of orbifold Wilson lines mesh with the original definition in terms of distinct phases associated to twisted sectors of string partition functions? Recall there are factors
$$\mathrm{exp}\left(\int _\mathrm{\Sigma }\varphi ^{\ast }B\right)$$ (7)
in the partition function, contributing the holonomy of the $`B`$-field. (We have used $`\varphi `$ to denote the embedding map $`\varphi :\mathrm{\Sigma }\rightarrow X`$ of the worldsheet into spacetime.) Because of these holonomy factors, distinct lifts of the orbifold action to the 1-gerbe with connection (i.e., distinct equivariant structures on the 1-gerbe with connection) yield distinct phases in the twisted sectors of orbifold partition functions – we recover the original description of discrete torsion . Put another way, we do not just derive some set of discrete degrees of freedom that happen coincidentally to also be measured by $`H^2(\mathrm{\Gamma },U(1))`$; the discrete degrees of freedom we recover necessarily describe discrete torsion. The passage from lifts of orbifold actions to phase factors is provided by the partition function factors (7).

In passing, we should mention that the phase factors (7) were the original reason that discrete torsion, viewed as a set of phases of twisted sector contributions to partition functions, was associated with $`B`$-fields at all . In some sense, our description of discrete torsion is a natural outgrowth of some of the original ideas in .

In passing, we should also briefly speak to discrete torsion on D-branes as discussed in . In those references, D-branes on orbifolds with discrete torsion were argued to be described by specifying a projective representation of the orbifold group on the bundle on the worldvolume. We believe (although we have not checked in total detail) that this can be derived from our description of discrete torsion, using the interconnection between $`B`$-field backgrounds and bundles on worldvolumes of D-branes, as recently described in \[36, section 6\].
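As a concrete illustration of how a class in $`H^2(\mathrm{\Gamma },U(1))`$ feeds into the twisted-sector phases just discussed, the following sketch (ours, not from the references; the representative cocycle used is a standard textbook choice we are assuming) tabulates the phases $`ϵ(g,h)=\omega (g,h)\omega (h,g)^{-1}`$ that weight the $`(g,h)`$ twisted sector for $`\mathrm{\Gamma }=𝐙_n\times 𝐙_m`$.

```python
import cmath
from math import gcd

# Discrete torsion phases for Gamma = Z_n x Z_m, using the standard
# representative 2-cocycle omega((a1,b1),(a2,b2)) = zeta^(a1*b2),
# with zeta a gcd(n,m)-th root of unity.
def make_cocycle(n, m, p=1):
    zeta = cmath.exp(2j * cmath.pi * p / gcd(n, m))
    return lambda g, h: zeta ** (g[0] * h[1])

def check_cocycle(omega, group, n, m):
    # inhomogeneous 2-cocycle condition (trivial action on the coefficients)
    for g1 in group:
        for g2 in group:
            for g3 in group:
                g12 = ((g1[0] + g2[0]) % n, (g1[1] + g2[1]) % m)
                g23 = ((g2[0] + g3[0]) % n, (g2[1] + g3[1]) % m)
                lhs = omega(g1, g2) * omega(g12, g3)
                rhs = omega(g2, g3) * omega(g1, g23)
                assert abs(lhs - rhs) < 1e-9

n = m = 2
group = [(a, b) for a in range(n) for b in range(m)]
omega = make_cocycle(n, m)
check_cocycle(omega, group, n, m)

# epsilon(g,h) = omega(g,h)/omega(h,g): the phase weighting the (g,h) sector
for g in group:
    print([round((omega(g, h) / omega(h, g)).real) for h in group])
# for Z_2 x Z_2 this prints the familiar pattern of +/-1 phases
```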
It is one thing to classify possible lifts of orbifold actions to gerbes with connection; it is quite another to describe precisely the characteristic classes and holonomies of the resulting gerbe on the quotient space. In principle, both could be determined as for orbifold Wilson lines: for a gerbe on a space $`X`$, with orbifold action $`\mathrm{\Gamma }`$, construct a gerbe on the space $`E\mathrm{\Gamma }\times _\mathrm{\Gamma }X`$, such that the projection to $`X/\mathrm{\Gamma }`$ yields the quotient gerbe, analogously to the program pursued in . We shall not pursue this program here.

Suppose the (discrete) orbifold group $`\mathrm{\Gamma }`$ acting on $`X`$ acts freely, i.e., without fixed points. In section 2.1, we studied moduli spaces of flat connections on quotient spaces, in order to gain insight into orbifold Wilson lines. In particular, we argued that (for bundles admitting flat connections) orbifold $`U(1)`$ Wilson lines could be understood directly in terms of extra elements of $`\text{Hom}(\pi _1,U(1))`$ in the quotient space, for $`\mathrm{\Gamma }`$ a freely-acting discrete group. In other words, in this case orbifold Wilson lines were precisely Wilson lines on the quotient space. We shall perform analogous calculations here. (In the next few paragraphs we shall implicitly only consider flat $`n`$-gerbes – but only for the purposes of performing illuminating calculations. We do not make such assumptions elsewhere.)

For gerbes, the interpretation is slightly more obscure. First, note that in the case $`\mathrm{\Gamma }`$ acts freely, we have a principal $`\mathrm{\Gamma }`$ bundle $`\mathrm{\Gamma }\rightarrow X\rightarrow X/\mathrm{\Gamma }`$, so we can apply the long exact sequence for homotopy to find that $`\pi _n(X)=\pi _n(X/\mathrm{\Gamma })`$ for $`n>1`$ and $`\mathrm{\Gamma }`$ discrete. In other words, although the fundamental group of $`X/\mathrm{\Gamma }`$ received a contribution from $`\mathrm{\Gamma }`$, the higher homotopy groups of $`X/\mathrm{\Gamma }`$ are identical to those of $`X`$. Thus, the higher-dimensional analogues of orbifold Wilson lines (for flat $`n`$-gerbes) cannot correspond to extra elements of homotopy groups. We shall find, rather, that the higher-dimensional analogues correspond to extra elements of
$$\text{Hom}_𝐙(H_n(X/\mathrm{\Gamma }),U(1))$$

We shall now describe the homology of the quotient $`X/\mathrm{\Gamma }`$. One way to compute the homology of the quotient space $`X/\mathrm{\Gamma }`$ is as the limit of the Cartan-Leray spectral sequence \[37, section VII.7\]
$$E_{p,q}^2=H_p(\mathrm{\Gamma },H_q(X))$$ (8)
Note that the group homology appearing in the definition above has the property that, in general, $`\mathrm{\Gamma }`$ acts nontrivially on the coefficients $`H_q(X)`$, even if $`\mathrm{\Gamma }`$ acts freely on $`X`$. (An example should make this clear. Let $`X`$ be the disjoint union of 2 identical disks, and let $`\mathrm{\Gamma }`$ be a $`𝐙_2`$ exchanging the two disks. Then $`H_0(X)=𝐙^2`$, and $`\mathrm{\Gamma }`$ exchanges the two $`𝐙`$ factors, i.e., $`\mathrm{\Gamma }`$ acts nontrivially on $`H_0(X)`$.) In special cases, the homology of $`X/\mathrm{\Gamma }`$ can be computed more directly.
For example, for any path-connected space $`Y`$, for any $`n>1`$ such that $`\pi _r(Y)=0`$ for all $`1<r<n`$, we have that \[38, theorem II\] the following sequence is exact:
$$0\rightarrow \pi _n(Y)\rightarrow H_n(Y)\rightarrow H_n(\pi _1(Y),𝐙)\rightarrow 0$$
where the group homology $`H_n(\pi _1(Y),𝐙)`$ is defined by the group $`\pi _1(Y)`$ having trivial action on the coefficients $`𝐙`$. Using the results above, we find that for path-connected $`X`$ such that $`\pi _r(X)=0`$ for $`1<r<n`$ for some $`n>1`$, the following sequence is exact:
$$0\rightarrow \pi _n(X)\rightarrow H_n(X/\mathrm{\Gamma })\rightarrow H_n(\pi _1(X/\mathrm{\Gamma }),𝐙)\rightarrow 0$$
In the special case that $`\pi _1(X)=0`$, we can rewrite this as
$$0\rightarrow \pi _n(X)\rightarrow H_n(X/\mathrm{\Gamma })\rightarrow H_n(\mathrm{\Gamma },𝐙)\rightarrow 0$$
Moreover, using the Hurewicz theorem, applying the functor $`\text{Hom}_𝐙(-,U(1))`$, and using a relevant universal coefficient theorem, we can rewrite the short exact sequence above as (this result has been independently derived, using other methods, by P. Aspinwall)
$$0\rightarrow H^n(\mathrm{\Gamma },U(1))\rightarrow \text{Hom}_𝐙(H_n(X/\mathrm{\Gamma }),U(1))\rightarrow \text{Hom}_𝐙(H_n(X),U(1))\rightarrow 0$$
(Technically we are also assuming that $`X/\mathrm{\Gamma }`$ is a path-connected space.)

From the calculation above we can extract two important lessons. First, for $`n=2`$, we see that (in special cases) the holonomy of a $`B`$-field (of a flat 1-gerbe) on the quotient $`X/\mathrm{\Gamma }`$, as measured by $`\text{Hom}(H_2(X/\mathrm{\Gamma }),U(1))`$, differs from the possible holonomies on the covering space by $`H^2(\mathrm{\Gamma },U(1))`$, and so we can understand discrete torsion in such cases as being precisely the extra contribution to $`\text{Hom}(H_2(X),U(1))`$ on the quotient. (More generally, the precise relationship between group cohomology and holonomies of $`B`$-fields is described by the spectral sequence (8).)

The second, and more basic, lesson we can extract from the calculation above is that it is quite reasonable to believe that there exist analogues of discrete torsion and orbifold Wilson lines for the higher-ranking tensor fields appearing in supergravity theories, and that those analogues of discrete torsion should be measured by higher-degree group cohomology $`H^n(\mathrm{\Gamma },U(1))`$. We shall return to this point later. In this paper, we only derive discrete torsion for $`B`$-fields, and in so doing find $`H^2(\mathrm{\Gamma },U(1))`$. (Our derivation in is not restricted to flat 1-gerbes; the restriction to flat 1-gerbes in the previous few paragraphs was for purposes of making illustrative calculations only. We should also mention that our derivation in does not assume $`\mathrm{\Gamma }`$ is freely acting, or that it is abelian – our derivation holds equally well regardless.) However, our general methods should apply equally well to higher-ranking tensor fields, and it is extremely tempting to conjecture that the analogue of discrete torsion for an $`n`$-gerbe is measured by $`H^{n+1}(\mathrm{\Gamma },U(1))`$.

Before we go on to outline how discrete torsion can be derived, we shall mention that in this paper, when speaking of an $`n`$-gerbe on a space $`X`$, we shall assume that $`X`$ has ($`𝐑`$) dimension at least $`n`$.

### 4.2 Equivariant gerbes

In this section we shall try to give some intuitive understanding of the classification of equivariant structures on 1-gerbes, that is, the classification of lifts of the orbifold action to 1-gerbes.
More precisely, we shall study what equivariant structures on 1-gerbes mean at the level of transition functions for 1-gerbes. We shall not be able to rigorously derive results on equivariant gerbes in this fashion – such derivations are instead given in . However, we hope that this approach should give the reader some intuitive understanding of our results, without requiring them to gain a detailed understanding of 1-gerbes in terms of stacks.

Let $`𝒞`$ denote a 1-gerbe on a space $`X`$, and let $`\mathrm{\Gamma }`$ denote a group acting on $`X`$ by homeomorphisms. Let $`\{U_\alpha \}`$ be a “good invariant” cover of $`X`$ – namely, a cover such that each $`U_\alpha `$ is invariant under $`\mathrm{\Gamma }`$ and each $`U_\alpha `$ is a disjoint union of contractible open sets. (For example, we can often obtain such a cover as the inverse image of a good cover on the quotient $`X/\mathrm{\Gamma }`$.) Note that a good invariant cover is not usually a good cover.

In order to define $`𝒞`$ at the level of transition functions for the cover $`\{U_\alpha \}`$, recall we need to specify a line bundle $`ℒ_{\alpha \beta }`$ on each overlap $`U_{\alpha \beta }=U_\alpha \cap U_\beta `$, and an explicit trivialization $`\theta _{\alpha \beta \gamma }:U_{\alpha \beta \gamma }\rightarrow U(1)`$ of $`ℒ_{\alpha \beta }\otimes ℒ_{\beta \gamma }\otimes ℒ_{\gamma \alpha }`$ on $`U_{\alpha \beta \gamma }`$.

Now, let us describe how one defines an equivariant structure on the 1-gerbe $`𝒞`$ at the level of transition functions. First, we need $`g^{\ast }ℒ_{\alpha \beta }\cong ℒ_{\alpha \beta }`$ for all $`g\in \mathrm{\Gamma }`$. Let $`\psi _{\alpha \beta }^g:ℒ_{\alpha \beta }\stackrel{\sim }{\rightarrow }g^{\ast }ℒ_{\alpha \beta }`$ denote a specific choice of isomorphism. Since $`\{U_\alpha \}`$ is a good invariant cover of $`X`$, we can represent each $`\psi _{\alpha \beta }^g`$ by a function $`\nu _{\alpha \beta }^g:U_{\alpha \beta }\rightarrow U(1)`$. Note that the $`\theta _{\alpha \beta \gamma }`$ necessarily now obey
$$g^{\ast }\theta _{\alpha \beta \gamma }(=\theta _{\alpha \beta \gamma }\circ g)=\theta _{\alpha \beta \gamma }\nu _{\alpha \beta }^g\nu _{\beta \gamma }^g\nu _{\gamma \alpha }^g$$ (9)

Before going on, we should pause to derive an implication of equation (9). Let $`\nu _{\alpha \beta }^g`$ and $`\overline{\nu }_{\alpha \beta }^g`$ denote a pair of maps (partially) defining equivariant structures on $`𝒞`$. Define
$$\gamma _{\alpha \beta }^g\equiv \frac{\nu _{\alpha \beta }^g}{\overline{\nu }_{\alpha \beta }^g}$$
then the $`\gamma _{\alpha \beta }^g`$ satisfy
$$\gamma _{\alpha \beta }^g\gamma _{\beta \gamma }^g\gamma _{\gamma \alpha }^g=1$$
for all $`g\in \mathrm{\Gamma }`$, and so define transition functions for a bundle on $`X`$ we shall denote $`T_g`$. Thus, even though we have not finished describing equivariant structures on the 1-gerbe $`𝒞`$ at the level of transition functions, we can already derive the fact that any two equivariant structures will differ by, among other things, a set of principal $`U(1)`$-bundles $`T_g`$, one for each $`g\in \mathrm{\Gamma }`$.

Before we can claim to have defined an equivariant structure on the transition functions for $`𝒞`$, we need to fill in a few more details. In particular, how do the $`\nu `$ behave under composition of actions of elements of $`\mathrm{\Gamma }`$? We shall demand that for any pair $`g_1,g_2\in \mathrm{\Gamma }`$,
$$(\nu _{\alpha \beta }^{g_2})g_2^{\ast }(\nu _{\alpha \beta }^{g_1})=(\nu _{\alpha \beta }^{g_1g_2})h(g_1,g_2)_\alpha h(g_1,g_2)_\beta ^{-1}$$ (10)
for some functions $`h(g_1,g_2)_\alpha :U_\alpha \rightarrow U(1)`$.
We shall also demand that the functions $`h(g_1,g_2)_\alpha `$ satisfy
$$h(g_1,g_2)_\alpha h(g_1g_2,g_3)_\alpha =h(g_2,g_3)_\alpha h(g_1,g_2g_3)_\alpha $$ (11)
These constraints probably seem relatively unnatural to the reader. In our discussion of equivariant gerbes in terms of stacks, we shall show how these constraints (or, rather, their more complete versions for stacks) are quite natural.

We can attempt to rewrite equations (10) and (11) somewhat more invariantly in terms of the line bundles $`ℒ_{\alpha \beta }`$ on overlaps $`U_{\alpha \beta }=U_\alpha \cap U_\beta `$. Recall that $`\nu _{\alpha \beta }^g`$ is the local trivialization representation of the bundle morphism $`\psi _{\alpha \beta }^g:ℒ_{\alpha \beta }\rightarrow g^{\ast }ℒ_{\alpha \beta }`$; equation (10) then states that the two bundle morphisms
$$(g_2^{\ast }\psi _{\alpha \beta }^{g_1})\psi _{\alpha \beta }^{g_2}:ℒ_{\alpha \beta }\rightarrow (g_1g_2)^{\ast }ℒ_{\alpha \beta }$$
and
$$\psi _{\alpha \beta }^{g_1g_2}:ℒ_{\alpha \beta }\rightarrow (g_1g_2)^{\ast }ℒ_{\alpha \beta }$$
are related by a gauge transformation on $`(g_1g_2)^{\ast }ℒ_{\alpha \beta }`$ defined by $`h(g_1,g_2)_\alpha h(g_1,g_2)_\beta ^{-1}`$, i.e.,
$$(g_2^{\ast }\psi _{\alpha \beta }^{g_1})\psi _{\alpha \beta }^{g_2}=\kappa \psi _{\alpha \beta }^{g_1g_2}$$
where $`\kappa :(g_1g_2)^{\ast }ℒ_{\alpha \beta }\rightarrow (g_1g_2)^{\ast }ℒ_{\alpha \beta }`$ is the gauge transformation defined by the function $`h(g_1,g_2)_\alpha h(g_1,g_2)_\beta ^{-1}`$ on $`U_{\alpha \beta }`$.

Given two distinct equivariant structures on the same transition functions, labelled by $`\nu ,\overline{\nu }`$ and $`h,\overline{h}`$, if we define functions
$$\omega (g_1,g_2)_\alpha \equiv \frac{h(g_1,g_2)_\alpha }{\overline{h}(g_1,g_2)_\alpha }$$
then from equation (10) we have the relation
$$(\gamma _{\alpha \beta }^{g_2})g_2^{\ast }(\gamma _{\alpha \beta }^{g_1})=(\gamma _{\alpha \beta }^{g_1g_2})\omega (g_1,g_2)_\alpha \omega (g_1,g_2)_\beta ^{-1}$$ (12)
The functions $`\omega (g_1,g_2)_\alpha `$ define local trivialization realizations of isomorphisms of principal $`U(1)`$-bundles. We denote these bundle isomorphisms by $`\omega _{g_1,g_2}`$, and so we can rewrite equation (12) more invariantly as the definition of $`\omega _{g_1,g_2}`$:
$$\omega _{g_1,g_2}:T_{g_1g_2}\stackrel{\sim }{\rightarrow }T_{g_2}\otimes g_2^{\ast }T_{g_1}$$
Furthermore, from equation (11) we see that the bundles $`T_g`$ and isomorphisms $`\omega _{g_1,g_2}`$ are further related by the commutative diagram
$$\begin{array}{ccc}T_{g_1g_2g_3}& \stackrel{\omega _{g_1g_2,g_3}}{\longrightarrow }& T_{g_3}\otimes g_3^{\ast }T_{g_1g_2}\\ {\scriptstyle \omega _{g_1,g_2g_3}}\downarrow & & \downarrow {\scriptstyle \omega _{g_1,g_2}}\\ T_{g_2g_3}\otimes (g_2g_3)^{\ast }T_{g_1}& \underset{\omega _{g_2,g_3}}{\longrightarrow }& T_{g_3}\otimes g_3^{\ast }(T_{g_2}\otimes g_2^{\ast }T_{g_1})\end{array}$$ (13)

So far we have argued that the difference between two equivariant structures on a 1-gerbe is determined by the data $`(T_g,\omega _{g_1,g_2})`$, where the $`\omega `$ are required to make diagram (13) commute. However, it should also be said that the bundles $`T_g`$ are only determined up to isomorphism. Given a set of principal bundle isomorphisms $`\kappa _g:T_g\rightarrow T_g^{\prime }`$, we can replace the data $`(T_g,\omega _{g_1,g_2})`$ by the data $`(T_g^{\prime },\kappa _{g_1g_2}\omega _{g_1,g_2}(\kappa _{g_2}\otimes g_2^{\ast }\kappa _{g_1})^{-1})`$ and describe the same difference between equivariant structures.

### 4.3 Equivariant gerbes with connection

To properly derive the classification of equivariant gerbes with connection at the level of transition functions is rather messy and not very illuminating, so instead we shall settle for outlining the main points.
(A complete derivation, in terms of gerbes as stacks, can be found in , and a complete derivation at the level of transition functions, as well as related information, will appear in .) In the previous section, we argued that any two equivariant structures on a (1-)gerbe differ by a set of principal $`U(1)`$ bundles $`T_g`$ ($`g\in \mathrm{\Gamma }`$), together with appropriate bundle isomorphisms $`\omega _{g_1,g_2}`$, such that diagram (13) commutes, modulo isomorphisms of bundles.

A gauge transformation of a gerbe with connection is defined by a principal $`U(1)`$-bundle with connection, so the reader should not be surprised to hear that the difference between two equivariant structures on a 1-gerbe with connection is defined by bundles $`T_g`$ with connection, together with connection-preserving maps $`\omega _{g_1,g_2}`$ such that diagram (13) commutes. Also, the connections on the bundles $`T_g`$ are constrained to be flat.

Note that this structure is closely analogous to the discussion of orbifold $`U(1)`$ Wilson lines. In both cases, we find that the difference between two equivariant structures is determined by a set of gauge transformations, such that the gauge transformation associated to the product $`g_1g_2`$ is isomorphic to the product of the gauge transformations associated to $`g_1`$ and $`g_2`$. The constraint for bundles that the gauge transformations be constant becomes the present constraint that the gerbe gauge transformations, defined by bundles with connection, must have a flat connection.

Just as before, the bundles $`T_g`$ are only defined up to equivalence. We can replace any of the bundles $`T_g`$ with connection with an isomorphic bundle with connection (changing $`\omega _{g_1,g_2}`$ appropriately), and describe the same difference between equivariant structures.

Where does $`H^2(\mathrm{\Gamma },U(1))`$ appear in this structure? Assume for simplicity that $`X`$ is connected. Take all the bundles $`T_g`$ to be topologically trivial, with gauge-trivial connections. Then, by replacing these bundles with isomorphic bundles, we can assume without loss of generality that each bundle $`T_g`$ is the canonical trivial bundle, with identically zero connection. The morphisms $`\omega _{g_1,g_2}`$ are now gauge transformations of the canonical trivial bundle, and since they must preserve the connection, they must be constant maps. From commutativity of diagram (13), it is clear that any set of $`\omega _{g_1,g_2}`$ defines a cocycle representative of an element of $`H^2(\mathrm{\Gamma },U(1))`$ (with trivial action of $`\mathrm{\Gamma }`$ on the coefficients $`U(1)`$), in the inhomogeneous representation. Now, we still can act on any of the $`T_g`$ by constant gauge transformations to get isomorphic equivariant structures, and it is easy to see that these define group coboundaries.

Now, in general not every set of data $`(T_g,\omega _{g_1,g_2})`$ corresponds to topologically-trivial $`T_g`$ with gauge-trivial connection – the $`T_g`$ are only constrained to be flat, so it is not difficult to find new degrees of freedom that do not correspond to elements of $`H^2(\mathrm{\Gamma },U(1))`$. We shall discuss these further in . In special cases, such as trivial gerbes, there exist canonical trivial equivariant structures, and so elements of the group $`H^2(\mathrm{\Gamma },U(1))`$ can be identified with (some of) the equivariant structures.
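The last two paragraphs can be checked mechanically. In the following sketch (our own verification, using an assumed standard representative cocycle), the constant maps $`\omega _{g_1,g_2}`$ on trivial bundles are required to make diagram (13) commute, which for constants reads $`\omega (g_1,g_2)\omega (g_1g_2,g_3)=\omega (g_2,g_3)\omega (g_1,g_2g_3)`$ – precisely the inhomogeneous 2-cocycle condition – and constant gauge transformations of the $`T_g`$ shift $`\omega `$ by a group coboundary.

```python
import cmath, itertools, random

# Gamma = Z_2 x Z_2; the group law is componentwise addition mod 2.
G = [(a, b) for a in range(2) for b in range(2)]
mul = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

# constant omegas making diagram (13) commute must satisfy
#   omega(g1,g2) * omega(g1 g2, g3) == omega(g2,g3) * omega(g1, g2 g3)
def commutes(omega):
    return all(abs(omega[g1, g2] * omega[mul(g1, g2), g3]
                   - omega[g2, g3] * omega[g1, mul(g2, g3)]) < 1e-9
               for g1, g2, g3 in itertools.product(G, repeat=3))

# a standard nontrivial representative: omega((a1,b1),(a2,b2)) = (-1)^(a1 b2)
omega = {(g, h): (-1.0) ** (g[0] * h[1]) for g in G for h in G}
assert commutes(omega)

# constant gauge transformations of the trivial bundles T_g shift omega
# by the group coboundary mu(g) mu(h) / mu(gh) ...
random.seed(1)
mu = {g: cmath.exp(2j * cmath.pi * random.random()) for g in G}
omega2 = {(g, h): omega[g, h] * mu[g] * mu[h] / mu[mul(g, h)]
          for g in G for h in G}
assert commutes(omega2)   # ... and the result still makes (13) commute
print("constant omegas from diagram (13) are group 2-cocycles: OK")
```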
More generally, the identification of (some) equivariant structures with $`H^2(\mathrm{\Gamma },U(1))`$ is not canonical. (Technically, in general the set of equivariant structures on a gerbe with connection is merely a torsor.)

### 4.4 Analogues of discrete torsion

So far in this paper we have outlined how orbifold Wilson lines and discrete torsion both appear as an ambiguity in the choice of orbifold group action on some tensor field potential. Although we have only concerned ourselves with vector fields and NS-NS $`B`$ fields, in principle an analogous ambiguity exists for every tensor field potential appearing in string theory. Put another way, in general whenever one has a theory containing fields with gauge invariances, specifying an orbifold group action on the base space does not suffice to define the orbifold group action on the fields of the theory, as one can combine any orbifold group action with gauge transformations.

For example, the other RR tensor field potentials of type II theories should also have analogues of discrete torsion, given as the ambiguity in the choice of orbifold group action on the fields. It has recently been pointed out that these fields should be understood in terms of K-theory, so given some Cheeger-Simons-type description of K-theory, one should be able to calculate the analogues of discrete torsion for these other fields.

What might analogues of discrete torsion be for other tensor field potentials? There is an obvious conjecture. For vector fields, we found that the set of equivariant structures is a torsor under the group $`H^1(\mathrm{\Gamma },U(1))`$. For $`B`$ fields, we found that the set of equivariant structures is a torsor under a group which includes $`H^2(\mathrm{\Gamma },U(1))`$. Therefore, for a rank $`p`$ tensor field potential, it is tempting to conjecture that the set of equivariant structures is a torsor under some group which includes $`H^p(\mathrm{\Gamma },U(1))`$. We are presently studying this matter .

The reader might well ask how such degrees of freedom could be seen in perturbative string theory. Orbifold Wilson lines and discrete torsion both crop up unavoidably; but how could one turn on analogues for RR fields? The answer surely lies in the description of RR field backgrounds in perturbative string theory. Judging from the results in, for example, , it seems reasonable to assume that one can understand Ramond-Ramond backgrounds in conformal field theory after coupling to the superconformal ghosts, so in principle analogues of discrete torsion for RR fields in conformal field theory might emerge when considering orbifolds of such backgrounds. Unfortunately, it might also be true that the RR analogues of discrete torsion are simply not visible in string perturbation theory.

It is quite possible that there may also be certain analogues of modular invariance conditions for these analogues of discrete torsion. We have only discussed gerbes in isolation, whereas in type II theories, the gerbes interact with one another (and so cannot really be understood as gerbes). It is quite conceivable that, in order for any given orbifold to define a symmetry of the full physical theory, there are nontrivial constraints among analogues of discrete torsion for various gerbes. We have nothing particularly concrete to say on this matter, though we hope to return to it in . It is not clear, however, whether all analogues of modular invariance conditions can be described in this fashion.
For example, in it was argued that there existed a constraint on orbifold Wilson lines associated to the IIA RR 1-form, arising nonperturbatively. (We are referring to the so-called “black hole level matching” of that reference.) Unfortunately we are not able to address the existence and interpretation of such constraints.

## 5 Conclusions

In this paper we have given a geometric description of discrete torsion, as a precise analogue of orbifold Wilson lines. Put another way, we have described discrete torsion as “orbifold Wilson surfaces.” After giving a mathematically precise discussion of orbifold Wilson lines, we outlined how the classification of orbifold Wilson lines (as equivariant structures on bundles with connection) could be extended to discrete torsion (as equivariant structures on 1-gerbes with connection). Although we outlined how this result on discrete torsion was proven, we have deferred a rigorous examination to .

## 6 Acknowledgements

We would like to thank P. Aspinwall, D. Freed, A. Knutson, D. Morrison, and R. Plesser for useful conversations, and E. Silverstein for pointing out reference . We would also like to thank J. Stasheff for extremely extensive comments on the first version of this paper, and we would like to apologize for taking so long to implement his suggestions.

## Appendix A Review of group cohomology

For a complete technical overview of group cohomology, the standard reference is . For much shorter and more accessible accounts, we recommend \[46, section IV.4\] and .

Let $`G`$ and $`M`$ be groups, $`M`$ abelian, with a (possibly trivial) action of $`G`$ on $`M`$ by group automorphisms. We shall assume that the action of $`G`$ commutes with the group operation of $`M`$ on itself. Define $`C^n(G,M)`$ to be the set of all maps
$$ϵ:G\times \mathrm{}\times G=G^{n+1}\rightarrow M$$
such that $`ϵ(gg_0,gg_1,\mathrm{\dots },gg_n)=gϵ(g_0,g_1,\mathrm{\dots },g_n)`$ for all $`g,g_i\in G`$. (This representation of the cochains is known as a homogeneous representation, because of the obvious analogy with projective spaces.) Define a coboundary operator $`\delta :C^n(G,M)\rightarrow C^{n+1}(G,M)`$ by
$$(\delta ϵ)(g_0,\mathrm{\dots },g_{n+1})=\underset{k=0}{\overset{n+1}{\sum }}(-)^kϵ(g_0,\mathrm{\dots },\widehat{g_k},\mathrm{\dots },g_{n+1})$$
Note that $`\delta ^2ϵ=1`$, the identity of $`M`$. Define $`Z^n(G,M)`$ to be the set of cocycles, that is, $`ϵ\in \text{ker }\delta \subset C^n(G,M)`$. Define $`B^n(G,M)`$ to be the set of coboundaries, that is, $`ϵ\in \text{im }\delta \subset C^n(G,M)`$. Then define the group cohomology to be $`H^n(G,M)=Z^n(G,M)/B^n(G,M)`$.

There is an alternative presentation of group cohomology, which can be defined as follows. Given a cochain $`ϵ\in C^n(G,M)`$, which is to say, a map $`G^{n+1}\rightarrow M`$, define a map $`\stackrel{~}{ϵ}:G^n\rightarrow M`$ as
$$\stackrel{~}{ϵ}(g_1,g_2,\mathrm{\dots },g_n)=ϵ(e,g_1,g_1g_2,g_1g_2g_3,\mathrm{\dots },g_1g_2\mathrm{}g_n)$$
This is known as an inhomogeneous representation; that is, these are called inhomogeneous cochains.
It is then easy to demonstrate that
$`(\delta \stackrel{~}{ϵ})(g_1,g_2,\mathrm{\dots },g_{n+1})`$ $`=`$ $`g_1\stackrel{~}{ϵ}(g_2,\mathrm{\dots },g_{n+1})`$
$`+{\displaystyle \underset{k=1}{\overset{n}{\sum }}}(-)^k\stackrel{~}{ϵ}(g_1,g_2,\mathrm{\dots },g_kg_{k+1},\mathrm{\dots },g_{n+1})`$
$`+(-)^{n+1}\stackrel{~}{ϵ}(g_1,g_2,\mathrm{\dots },g_n)`$

In the group cohomology appearing in this paper, and to our knowledge in the physics literature to date, we always assume that the action of the group on the coefficients is trivial. (For example, experts should note that it is this latter, inhomogeneous form, restricted to the special case that the action of $`G`$ on $`M`$ is trivial, which appears in .)

When the action of $`G`$ on $`M`$ is assumed trivial, if $`ϵ:G\rightarrow M`$ is a homogeneous 0-cochain, then it is easy to check that $`ϵ`$ is constant. From the definitions of coboundaries for homogeneous and inhomogeneous cochains, it is easy to derive that the associated inhomogeneous 0-cochain $`\stackrel{~}{ϵ}`$ must always be the identity of $`M`$. To repeat, if $`\stackrel{~}{ϵ}`$ is an inhomogeneous 0-cochain, then $`\stackrel{~}{ϵ}=1\in M`$. As a consequence, for trivial action of $`G`$ on $`M`$, we have that $`H^1(G,M)=Z^1(G,M)`$, that is, $`H^1(G,M)`$ is precisely the set of group homomorphisms $`G\rightarrow M`$.

For group 2-cochains (defined with trivial group action on the coefficients), there is a gauge choice that is often used. From manipulating the group 2-cocycle condition (in inhomogeneous representation), it is easy to check that $`\stackrel{~}{ϵ}(1,g)=\stackrel{~}{ϵ}(g,1)=\stackrel{~}{ϵ}(1,1)`$ for any $`g`$. For convenience, one often sets $`\stackrel{~}{ϵ}(1,1)=1`$ (just pick a group coboundary conveniently). Then, in this gauge, $`\stackrel{~}{ϵ}(1,g)=\stackrel{~}{ϵ}(g,1)=1`$ for all $`g`$. One is still free to add any group coboundary in this gauge, modulo the constraint that if $`\mu (g)`$ defines a group coboundary, one needs $`\mu (1)=1`$ in order to stay in the gauge.

In passing, we shall mention that more formally, for any $`G`$-module $`M`$, we can define group cohomology as
$$H^n(G,M)\cong \text{Ext}_{𝐙[G]}^n(𝐙,M)$$
where $`𝐙[G]`$ is the free $`𝐙`$-module generated by the elements of $`G`$. In other words, any element of $`𝐙[G]`$ can be written uniquely in the form
$$\underset{g\in G}{\sum }a(g)g$$
where $`a(g)\in 𝐙`$. This definition of group cohomology does not make any assumptions concerning the nature of the $`G`$-action on $`M`$.

In addition to group cohomology, one can also define group homology in a very similar manner, though we shall not do so here. For the case of group homology and cohomology defined by groups with trivial actions on the coefficients, there exist precise analogues of the usual universal coefficient theorems for homology and cohomology \[37, exercise III.1.3\]. There is also a Künneth formula \[37, section V.5\]. For reference, we shall now list some commonly used group homology and cohomology groups.
First, the homology groups $`H_i(𝐙_n,𝐙)`$, where the group $`𝐙_n`$ acts trivially on the coefficients $`𝐙`$, are given by
$$H_i(𝐙_n,𝐙)=\{\begin{array}{cc}𝐙\hfill & i=0\hfill \\ 𝐙_n\hfill & i\text{ odd}\hfill \\ 0\hfill & i\text{ even},i>0\hfill \end{array}$$
The cohomology groups $`H^i(𝐙_n,U(1))`$, where the group $`𝐙_n`$ acts trivially on the coefficients $`U(1)`$, are given by
$$H^i(𝐙_n,U(1))=\{\begin{array}{cc}U(1)\hfill & i=0\hfill \\ 𝐙_n\hfill & i\text{ odd}\hfill \\ 0\hfill & i\text{ even},i>0\hfill \end{array}$$
From the Künneth formula \[37, section V.5\], we find that the homology groups $`H_i(𝐙_n\times 𝐙_m,𝐙)`$, where the group acts trivially on the coefficients $`𝐙`$, are given by
$$H_i(𝐙_n\times 𝐙_m,𝐙)=\{\begin{array}{cc}𝐙\hfill & i=0\hfill \\ 𝐙_n\oplus 𝐙_m\oplus \underset{(i-1)/2}{\bigoplus }\text{Tor}_1^𝐙(𝐙_n,𝐙_m)\hfill & i\text{ odd}\hfill \\ \underset{i/2}{\bigoplus }\left(𝐙_n\otimes _𝐙𝐙_m\right)\hfill & i\text{ even},i>0\hfill \end{array}$$
In other words,
$`H_0(𝐙_n\times 𝐙_m,𝐙)`$ $`=`$ $`𝐙`$
$`H_1(𝐙_n\times 𝐙_m,𝐙)`$ $`=`$ $`𝐙_n\oplus 𝐙_m`$
$`H_2(𝐙_n\times 𝐙_m,𝐙)`$ $`=`$ $`𝐙_n\otimes _𝐙𝐙_m`$
$`H_3(𝐙_n\times 𝐙_m,𝐙)`$ $`=`$ $`𝐙_n\oplus 𝐙_m\oplus \text{Tor}_1^𝐙(𝐙_n,𝐙_m)`$
$`H_4(𝐙_n\times 𝐙_m,𝐙)`$ $`=`$ $`\left(𝐙_n\otimes _𝐙𝐙_m\right)\oplus \left(𝐙_n\otimes _𝐙𝐙_m\right)`$
and so forth. Using the identities
$`𝐙_n\otimes _𝐙𝐙_m`$ $`=`$ $`𝐙_{\mathrm{𝑔𝑐𝑑}(n,m)}`$
$`\text{Tor}_1^𝐙(𝐙_n,𝐙_m)`$ $`=`$ $`𝐙_{\mathrm{𝑔𝑐𝑑}(n,m)}`$
and the appropriate universal coefficient theorem, one can compute the cohomology groups $`H^i(𝐙_n\times 𝐙_m,U(1))`$, where the group $`𝐙_n\times 𝐙_m`$ is assumed to act trivially on the coefficients $`U(1)`$:
$$H^i(𝐙_n\times 𝐙_m,U(1))=\{\begin{array}{cc}U(1)\hfill & i=0\hfill \\ 𝐙_n\oplus 𝐙_m\oplus \underset{(i-1)/2}{\bigoplus }𝐙_{\mathrm{𝑔𝑐𝑑}(n,m)}\hfill & i\text{ odd}\hfill \\ \underset{i/2}{\bigoplus }𝐙_{\mathrm{𝑔𝑐𝑑}(n,m)}\hfill & i\text{ even},i>0\hfill \end{array}$$
In other words,
$`H^0(𝐙_n\times 𝐙_m,U(1))`$ $`=`$ $`U(1)`$
$`H^1(𝐙_n\times 𝐙_m,U(1))`$ $`=`$ $`𝐙_n\oplus 𝐙_m`$
$`H^2(𝐙_n\times 𝐙_m,U(1))`$ $`=`$ $`𝐙_{\mathrm{𝑔𝑐𝑑}(n,m)}`$
$`H^3(𝐙_n\times 𝐙_m,U(1))`$ $`=`$ $`𝐙_n\oplus 𝐙_m\oplus 𝐙_{\mathrm{𝑔𝑐𝑑}(n,m)}`$
$`H^4(𝐙_n\times 𝐙_m,U(1))`$ $`=`$ $`𝐙_{\mathrm{𝑔𝑐𝑑}(n,m)}\oplus 𝐙_{\mathrm{𝑔𝑐𝑑}(n,m)}`$
and so forth. Note that we have used the notations $`\times `$ and $`\oplus `$ for abelian groups interchangeably in this subsection.
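As a brute-force cross-check of the $`H^2`$ entry in these tables, the script below is our own verification (not part of the original text). It relies on the standard fact – assumed here – that for a finite abelian group the antisymmetrization $`ϵ(g,h)=\omega (g,h)/\omega (h,g)`$ is a complete invariant of the class of $`\omega `$ in $`H^2(\mathrm{\Gamma },U(1))`$, so counting the distinct pairings realized by cocycles counts cohomology classes.

```python
from itertools import product

# Brute-force cross-check of H^2(Z_2 x Z_2, U(1)) = Z_gcd(2,2) = Z_2.
# We enumerate 2-cochains with values in {+1,-1}, which suffices to realize
# both classes for Gamma = Z_2 x Z_2.
G = [(a, b) for a in range(2) for b in range(2)]
mul = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)
pairs = [(g, h) for g in G for h in G]

def is_cocycle(w):
    return all(w[g1, g2] * w[mul(g1, g2), g3] == w[g2, g3] * w[g1, mul(g2, g3)]
               for g1 in G for g2 in G for g3 in G)

pairings = set()
for values in product([1, -1], repeat=len(pairs)):   # 2^16 candidate cochains
    w = dict(zip(pairs, values))
    if is_cocycle(w):
        # eps(g,h) = w(g,h)/w(h,g); with values +/-1, division = multiplication
        pairings.add(tuple(w[g, h] * w[h, g] for g, h in pairs))

print("number of classes:", len(pairings))   # -> 2, matching the table above
```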
# Finding an ordinary conic and an ordinary hyperplane

## 1 Introduction

Let $`𝒮`$ be a set of $`n`$ points in the plane. A connecting line of $`𝒮`$ is a line that passes through at least two of its points. A connecting line is said to be ordinary if it passes through exactly two points of $`𝒮`$. The problem of establishing the existence of such a line originated with Sylvester , who proposed the following problem in 1893:

> If $`n`$ points in the plane are such that a line passing through any two of them passes through a third point, then are the points collinear?

No solution came forth during the next forty years. In 1943, a positive version of the same problem was proposed by Erdős , and in the following year a solution by Gallai appeared in print. Subsequently other proofs also appeared, notable among which were the proofs by Steinberg and Kelly . These results showed that the answer is in the affirmative for real projective geometry in the plane. Therefore if the points of $`𝒮`$ are not collinear then there is at least one ordinary line. In fact, Kelly and Moser showed that there are at least $`3n/7`$ ordinary lines.

A set of points is said to be co-conic if all the points lie on one conic. In this paper we address a more general version of the ordinary line problem: given a set of $`n`$ points in the plane that are not co-conic, find a conic that passes through exactly five points. Our algorithm provides a constructive proof of the existence of such a conic. Another proof is contained in . Our proof is very simple and allows us to relate a result on the number of ordinary lines to the number of ordinary conics.

The paper is organized as follows. In the next section we discuss some mathematical preliminaries. The algorithm is discussed in the third section. We conclude in the fourth and final section.

## 2 Preliminaries

### 2.1 Notations and basic results

#### Space of conics

Let $`𝒮`$ be a set of $`n`$ points in $`IR^2`$. Let $`\varphi `$ be the transformation that maps a point $`p=(x,y)\in IR^2`$ to the point $`p^{\prime }=(x^2,y^2,xy,x,y)\in IR^5`$. Under this transformation, $`\varphi (IR^2)`$ is the 2-dimensional manifold image of $`IR^2`$ and $`𝒮^{\prime }=\varphi (𝒮)`$ the image of $`𝒮`$ in $`IR^5`$. If $`𝒞`$ is a conic in $`IR^2`$ with the equation $`ax^2+by^2+cxy+dx+ey+f=0`$, then $`\varphi (𝒞)=𝒞^{\prime }`$ is the intersection of $`\varphi (IR^2)`$ with the hyperplane $`𝒞^v`$: $`au+bv+cw+dx+ey+f=0`$ in $`IR^5`$. We identify the conics of $`IR^2`$ with hyperplanes of $`IR^5`$, which can thus be called the space of conics. This idea of mapping points into five dimensions is a natural generalization of the usual space of circles widely used in computational geometry, which associates a circle $`𝒞`$ in the plane to a point $`𝒞^{\prime }`$ in three dimensions and to the polar hyperplane $`𝒞^v`$ of $`𝒞^{\prime }`$ with respect to the unit paraboloid. A small numerical illustration of this lift appears below, before we turn to flats.
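The following sketch is ours, not part of the paper (the test points are a hypothetical example): a point $`p`$ lies on the conic with coefficient vector $`(a,b,c,d,e,f)`$ exactly when $`(\varphi (p),1)`$ is orthogonal to that vector, so the conic through five points reduces to a null-space computation.

```python
import numpy as np

# The lift phi : (x, y) -> (x^2, y^2, xy, x, y)
def phi(p):
    x, y = p
    return np.array([x * x, y * y, x * y, x, y], dtype=float)

def conic_through(points):
    """Coefficients (a,b,c,d,e,f) of the conic through five points:
    each point gives one row of the system a*u + b*v + c*w + d*x + e*y + f = 0."""
    rows = [np.append(phi(p), 1.0) for p in points]   # 5 x 6 homogeneous system
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1]                                     # null-space vector

# five points on the unit circle x^2 + y^2 - 1 = 0 (a hypothetical test case)
pts = [(1, 0), (0, 1), (-1, 0), (0, -1), (np.sqrt(0.5), np.sqrt(0.5))]
coeffs = conic_through(pts)
print(np.round(coeffs / coeffs[0], 6))  # -> [1, 1, 0, 0, 0, -1], i.e. x^2+y^2-1
```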
#### Flats

We recall a few basic results about finite-dimensional vector spaces. A flat $`F`$ is an affine subspace of $`IR^5`$ such that for any two points $`p,q\in F`$, $`\alpha p+\beta q\in F`$, where $`\alpha +\beta =1`$. A flat is defined by one of its points and its direction $`\stackrel{\rightarrow }{F}=\{\stackrel{\rightarrow }{pq};p,q\in F\}`$, which is a vectorial subspace of the vector space $`IR^5`$ (a flat is a set of points, its direction a set of vectors). Two subspaces $`\stackrel{\rightarrow }{F_1}`$ and $`\stackrel{\rightarrow }{F_2}`$ of $`IR^5`$ are called supplementary if and only if $`\stackrel{\rightarrow }{F_1}\cap \stackrel{\rightarrow }{F_2}=\{0\}`$ and $`dim(\stackrel{\rightarrow }{F_1})+dim(\stackrel{\rightarrow }{F_2})=5`$. Two flats $`F_1`$ and $`F_2`$ having supplementary directions have a unique intersection point. If $`A`$ and $`B`$ are two subsets of $`IR^5`$, we define the affine hull $`A\vee B`$ as the smallest flat that contains both $`A`$ and $`B`$.

#### Point-hyperplane duality

Point-hyperplane duality is a common transformation in computational geometry . A point $`p`$ at distance $`r`$ from the origin $`O`$ is associated with the hyperplane normal to $`Op`$ at distance $`1/r`$ from the origin. This transformation reduces the problem of computing the intersection of a finite set of half-spaces, each containing the origin, to the problem of computing the convex hull of the corresponding points in dual space.

#### Inversion

An inversive transformation maps a point $`p`$ at distance $`r`$ from the origin, $`O`$, to the point $`p^{\prime }`$ at distance $`1/r`$ from $`O`$, lying on the half-line $`[Op)`$ . This involutary transformation has the interesting property that the images of spheres and hyperplanes are spheres or hyperplanes. In particular, spheres passing through $`O`$ are exchanged with hyperplanes.

### 2.2 Ordinary line

For completeness, we briefly sketch the algorithm for finding an ordinary line in a finite set of non-collinear and coplanar points; a code sketch of its first stage follows this paragraph. Let $`l`$ be a directed line (direction $`\stackrel{\rightarrow }{v}`$) through exactly one point $`p_0`$ of $`𝒮`$. Let $`q_\lambda =p_0+\lambda \stackrel{\rightarrow }{v}`$. We find the line passing through at least two points of $`𝒮`$ that cuts $`l`$ in a point $`q_\lambda `$ with minimal $`\lambda >0`$. Such a line passes through two points consecutive in polar order around $`p_0`$ and can thus be found in $`O(n\mathrm{log}n)`$ time. Either this line is ordinary or a line through $`p_0`$ and a point on this line is ordinary. For details see Mukhopadhyay et al. .
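Here is a sketch of the first stage of this primitive (a hypothetical implementation of ours; the function names are not from the paper): sort the points by polar angle around $`p_0`$ and scan consecutive pairs for the connecting line whose intersection with the ray has minimal $`\lambda >0`$.

```python
import math

def first_crossing_line(p0, v, points):
    """Among lines through two points of `points` (none equal to p0) that cross
    the ray p0 + lambda*v, lambda > 0, return the pair whose crossing has
    minimal lambda.  Candidates are consecutive pairs in polar order around p0."""
    def lam(a, b):
        # solve p0 + lambda*v = a + t*(b - a) for lambda (a 2x2 linear system)
        dx, dy = b[0] - a[0], b[1] - a[1]
        det = v[0] * (-dy) - v[1] * (-dx)
        if abs(det) < 1e-12:
            return None                        # connecting line parallel to the ray
        rx, ry = a[0] - p0[0], a[1] - p0[1]
        return (rx * (-dy) - ry * (-dx)) / det

    pts = sorted(points, key=lambda p: math.atan2(p[1] - p0[1], p[0] - p0[0]))
    best, best_pair = math.inf, None
    for a, b in zip(pts, pts[1:] + pts[:1]):   # consecutive in polar order
        l = lam(a, b)
        if l is not None and 1e-12 < l < best:
            best, best_pair = l, (a, b)
    return best_pair
    # second stage (not shown): if the returned line carries more than two
    # points of S, some line through p0 and one of its points is ordinary
```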
The intersection is exactly one point because the directions of the flats $`ℬ∨\rho ′`$ and $`𝒞`$ are supplementary subspaces of the vector space $`IR^5`$. Otherwise, $`\rho ′`$ would belong to $`𝒜_{\theta _0}′`$, which is impossible since $`\rho ∉𝒜_{\theta _0}`$ by the definition of $`\theta _0`$. The set of points $`𝒮″=\left\{\rho ″;\rho ∈𝒮∖\{p,q,r\}\right\}`$ lies in the two-dimensional plane $`𝒞`$. Let $`l`$ be a line in that plane and $`ℋ`$ the hyperplane through $`ℬ`$ and $`l`$. By construction $`\rho ″∈l`$ if and only if $`\rho ′∈ℋ`$; indeed, by the definition of $`\rho ″`$, the line $`(\rho ′\rho ″)`$ cuts $`ℬ`$ in a point $`\overline{\rho }`$, and thus a hyperplane $`ℋ`$ containing $`ℬ`$ (and hence $`\overline{\rho }`$) cannot contain $`\rho ″`$ without containing $`\rho ′`$, nor $`\rho ′`$ without containing $`\rho ″`$. Thus at this point we see a complete equivalence between the problem of finding an ordinary line for $`𝒮″`$ and an ordinary conic for $`𝒮`$ through the points $`p`$, $`q`$ and $`r`$. If all the points of $`𝒮″`$ are collinear, then all the points of $`𝒮′`$ are in the same hyperplane and hence all the points of $`𝒮`$ are co-conic. Otherwise, there exists an ordinary line $`l`$ in $`𝒞`$, and the corresponding hyperplane $`ℋ`$ contains exactly five points of $`𝒮′`$. The corresponding conic is ordinary and passes through $`p,q,r`$. The following theorem is a consequence of the above discussion and the fact that there are at least $`\frac{3n}{7}`$ ordinary lines: > Theorem: Given a set $`𝒮`$ of $`n`$ points in the plane that are not co-conic, then for any three non-collinear points in $`𝒮`$, there exist at least $`\frac{3(n−3)}{7}`$ ordinary conics of $`𝒮`$ that pass through these three points. Furthermore, such an ordinary conic can be found in $`O(n\mathrm{log}n)`$ time. ## 4 Ordinary plane in three dimensions Given a set $`𝒮`$ of $`n(≥3)`$ points in three space, a connecting plane (that is, a plane through some three points of $`𝒮`$) is defined to be ordinary if all but one of the points of $`𝒮`$ that lie on it are collinear. Such a plane always exists, unless all the points of $`𝒮`$ are collinear. A plane that passes through exactly three points is certainly ordinary in the sense of this definition; however, such a plane need not exist. As an example, place three or more points on each of two skew lines in three space. This configuration of points has no connecting plane that is defined by exactly three points; all the ordinary planes contain one of the two lines (see Motzkin). We show that the ideas sketched in Section 2.2 can be generalized to three and higher dimensions to compute a plane that is ordinary in the sense of the above definition. Let $`p_0`$ be a point of $`𝒮`$ and $`\gamma `$ a line through $`p_0`$. Let $`\mathrm{\Lambda }`$ be a plane through three points $`p_1,p_2,p_3`$ of $`𝒮`$ such that its distance to $`p_0`$, measured along $`\gamma `$, is minimum among all possible connecting planes of $`𝒮`$ that intersect $`\gamma `$. If $`\mathrm{\Lambda }`$ is ordinary, we are done; otherwise, set $`g=\gamma ∩\mathrm{\Lambda }`$ and let $`\mathrm{\Gamma }`$ be an arbitrary plane containing $`\gamma `$; finally, set $`\lambda =\mathrm{\Lambda }∩\mathrm{\Gamma }`$. Let $`p_1`$, $`p_2`$, $`p_3`$ and $`p_4`$ be points of $`𝒮`$ in $`\mathrm{\Lambda }`$ such that no 3-tuple of the form $`p_1p_ip_j`$, $`i,j∈\{2,3,4\}`$, is collinear (such points exist in $`\mathrm{\Lambda }`$ since $`\mathrm{\Lambda }`$ is not ordinary). We consider the planes through $`p_0p_1`$ and $`p_2`$, $`p_3`$, $`p_4`$ respectively. 
Let $`L_2`$, $`L_3`$ and $`L_4`$ be their respective intersections with $`\mathrm{\Gamma }`$ and $`l_2`$, $`l_3`$ and $`l_4`$ their respective intersections with $`\lambda `$. Assume, without loss of generality, that $`l_2`$ is separated from $`g`$ along $`\lambda `$ by $`l_3`$ or $`l_4`$. Then we claim that the plane determined by the points $`p_0,p_1,p_2`$ is ordinary; that is, all the points of $`𝒮`$ on it, barring $`p_0`$, lie on the line determined by $`p_1,p_2`$. If not, let $`q`$ be a point, distinct from $`p_0`$, lying outside this line. In fact, if $`q∉\mathrm{\Lambda }∪L_2`$, one of the two planes $`qp_1p_3`$ or $`qp_1p_4`$ must pass between $`g`$ and $`p_0`$, contradicting the definition of $`\mathrm{\Lambda }`$. The dashed lines on Figure 2 show the different cases for the intersection of the plane with $`\mathrm{\Gamma }`$ depending on the position of $`q′=(p_1q)∩L_2`$. It remains to find $`\mathrm{\Lambda }`$ efficiently. It is clear that if we consider the cell containing $`p_0`$ in the arrangement of the $`O(n^3)`$ planes defined by points of $`𝒮∖\{p_0\}`$, then $`\mathrm{\Lambda }`$ is incident on the facet of that cell that is hit by $`\gamma `$. By a point-plane duality transformation, with $`p_0`$ as center, the cell containing $`p_0`$ is mapped into the convex hull of the $`O(n^3)`$ vertices of an arrangement of $`n−1`$ planes (see paragraph 2.1). In two dimensions we have to compute the convex hull of the vertices of an arrangement of $`n−1`$ lines, and it is not difficult to see that a vertex can be on the convex hull only if the two lines have consecutive slopes (in the set of all slopes). In three dimensions, the phenomenon is similar if we consider the Gaussian diagram of the normals to the $`n−1`$ planes (the convex hull of the unit normal vectors to the $`n−1`$ planes). Three planes define a vertex of the convex hull only if their normal vectors define a face of the Gaussian diagram. Since the Gaussian diagram can be computed in $`O(n\mathrm{log}n)`$ time, we get the following result: > Theorem: If $`𝒮`$ is a set of $`n`$ non-coplanar points in 3-space, then an ordinary plane can be found in $`O(n\mathrm{log}n)`$ time. The same ideas extend to higher dimensions. We can find an ordinary hyperplane with the help of a Gaussian diagram. The complexity is identical to the complexity of computing the convex hull in that dimension. ## 5 Conclusions In this note we have shown that an algorithm for finding a line through exactly two points of a given set of non-collinear points can be used to find a conic through exactly five points, if we assume that all the points are not co-conic. It is particularly easy to find an ordinary circle passing through a chosen point, $`p`$, of the given set of points, if we allow for a degenerate circle. We simply apply an inversion transformation with $`p`$ as the center of inversion (see paragraph 2.1) and solve the ordinary line problem for the remaining $`n−1`$ transformed points. We have a degenerate circle if the ordinary line found passes through $`p`$; else its image is an ordinary circle passing through $`p`$. We also conclude that at least $`3(n−1)/7`$ ordinary circles pass through a chosen point. By applying a stereographic projection we note that if $`n`$ points on a real sphere do not lie in the same plane then there is a plane containing exactly three of them.
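Although the argument above is purely combinatorial, the central construction is easy to check numerically. The following minimal sketch (in Python with numpy; the function names are ours and exact arithmetic is replaced by a floating-point tolerance) implements the lifting $`\varphi `$, recovers the conic through five points as a null vector of the lifted 5×6 system, and tests whether a conic of $`𝒮`$ is ordinary.

```python
import numpy as np

def lift(p):
    # Lifting phi(x, y) = (x^2, y^2, xy, x, y) from Section 2.1;
    # a trailing 1 accounts for the constant coefficient f of the conic.
    x, y = p
    return np.array([x * x, y * y, x * y, x, y, 1.0])

def conic_through(points5):
    # Coefficients (a, b, c, d, e, f) of the conic through five points:
    # a null vector of the 5x6 matrix of lifted coordinates.
    A = np.array([lift(p) for p in points5])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]          # right singular vector of the smallest singular value

def on_conic(coeffs, p, tol=1e-9):
    return abs(np.dot(coeffs, lift(p))) < tol

def is_ordinary_conic(points5, S, tol=1e-9):
    # A conic of S is ordinary if it passes through exactly five points of S.
    coeffs = conic_through(points5)
    return sum(on_conic(coeffs, q, tol) for q in S) == 5
```

The null-space computation mirrors the observation of Section 3 that the coefficients of the conic through $`p,q,r,s,t+\theta \stackrel{}{ı}`$ depend polynomially on the input points.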
no-problem/9909/hep-lat9909115.html
ar5iv
text
# Order 𝑎 improved renormalization constants ## 1 Introduction In Reference , we studied the improvement of the Wilson-Dirac theory by removing all lattice artifacts linear in the lattice spacing $`a`$ in on-shell matrix elements. In general, such an improvement for the dimension 3 fermion bilinear operators requires one to tune the coefficient of the clover operator in the action, determine the coefficient of an extra term each in the vector, axial-vector, and tensor currents, and obtain the mass dependence of all the renormalization constants. We showed that all these improvement constants, as well as the renormalization constants for the axial and vector currents, can be determined by imposing the axial and vector Ward identities. The remaining renormalization constants, those for the scalar, pseudoscalar and tensor operators, are scheme dependent and cannot be determined by this method. The scheme dependence is, however, the same for both the scalar and the pseudoscalar operators, and the corresponding ratio of renormalization constants can again be obtained. To test the efficacy of this method, we had evaluated the renormalization constants at $`\beta =6.0`$ for the perturbative (tadpole improved) value of the clover coefficient, and found that these differed from the values determined by the ALPHA collaboration using a non-perturbatively tuned coefficient. It was not clear whether the differences arose from different $`O(a^2)`$ errors in the two calculations, or whether the difference in the clover coefficient was responsible for the variation observed. Here we report results of carrying out our procedure after changing the clover coefficient to that used by the ALPHA collaboration, and also at $`\beta =6.2`$, where the $`O(a^2)`$ errors are expected to be much smaller. For the details of the notation used, the numerical technique, a study of the consistency when the same constant is determined in multiple ways, and the choice of the method which determines each quantity best, we refer the reader to Reference . To avoid confusion, we repeat here that the coefficients $`\stackrel{~}{b}_X`$ we defined differ from the coefficients $`b_X`$ used by earlier authors. In particular, at the level of $`O(a)`$ improvement, one has $`\stackrel{~}{b}_X=(Z_A^0Z_S^0/Z_P^0)b_X`$. ## 2 Lattice parameters and Results The results of our calculation done at $`\beta =6.0`$ and $`6.2`$ are presented in Table 1 and compared to the previous estimates by the ALPHA collaboration and perturbation theory. In those cases where the coefficients $`b_X`$ and $`\stackrel{~}{b}_X`$ differ by more than the estimated error in our determination, we quote both for easier comparison with previous results. The $`\beta =6.0`$ calculations were done on $`16^3\times 48`$ lattices, with a sample of 83 configurations at $`c_{SW}=1.4755`$, and 125 configurations at $`c_{SW}=1.769`$. At $`\beta =6.2`$, we used 61 configurations with a lattice size of $`24^3\times 64`$. The region of axial rotation used for the Ward identities comprises the time-slices 4–18 at $`\beta =6.0`$ and 6–25 at $`\beta =6.2`$. Details of the quark masses and the lattice discretizations employed in the calculation will be presented elsewhere . ## 3 Dependence on $`\beta `$ and $`c_{SW}`$ The determination of $`c_A`$ is illustrative of the improvement we notice as we go to weaker couplings. 
This quantity is determined by requiring that the ratio $$\frac{\sum _\stackrel{}{x}\langle \partial _\mu [A_\mu +ac_A\partial _\mu P]^{(ij)}(\stackrel{}{x},t)J^{(ji)}(0)\rangle }{\sum _\stackrel{}{x}\langle P^{(ij)}(\stackrel{}{x},t)J^{(ji)}(0)\rangle }$$ (1) be independent of the time $`t`$ at which it is evaluated, up to errors of $`O(a^2)`$. Because this flatness criterion is automatically satisfied if the correlators are saturated by a single state, the determination is very sensitive to the small $`t`$ region. On the other hand, at very small $`t`$, the $`O(a^2)`$ errors dominate. We find that the region well fit by a constant starts at smaller values of $`t`$ at $`\beta =6.2`$, and that this results in smaller statistical errors on the determination of $`c_A`$. As $`c_A`$ feeds into the 3-pt Ward identity calculation, the statistical errors are typically smaller at the weaker coupling. In addition, the sensitivity to how the continuum derivatives are discretized on the lattice is reduced, leading to smaller systematic errors. In contrast, the proper choice of $`c_{SW}`$ at $`\beta =6.0`$ has a much smaller effect on the quality of the signal. We find that changing $`c_{SW}`$ from 1.4755 to 1.769 has little effect on the statistical and systematic errors but brings $`Z_X^0`$ and $`c_X`$ closer to their perturbative values. In addition, the differences between the various $`\stackrel{~}{b}_X`$, which are almost zero in perturbation theory, also decrease. ## 4 Comparison with previous results Different calculations of the non-perturbative renormalization constants are expected to differ because of residual $`O(a^2)`$ errors in the theory. The magnitudes of these effects are expected to be $`O(\mathrm{\Lambda }_{QCD}^2a^2)`$ in $`Z_X^0`$ and $`O(\mathrm{\Lambda }_{QCD}a)`$ in $`c_X`$ and $`b_X`$. Numerically these are about 0.02 and 0.15 respectively at $`\beta =6.0`$, and 0.01 and 0.1 respectively at $`\beta =6.2`$. The differences between our values and those determined by the ALPHA collaboration are consistent with this qualitative expectation. In fact, the differences between the two determinations of $`Z_V^0`$ are 0.017(1) and 0.008(1) at $`\beta =6.0`$ and 6.2 respectively, exactly as expected from an $`O(a^2)`$ scaling. For $`Z_A^0`$, the differences of 0.020(17) and 0.009(13) at the two $`\beta `$ values are similar in magnitude, but have much larger errors. Of the coefficients $`b_X`$, only $`b_V`$ has been calculated by the ALPHA collaboration. The corresponding differences, 0.14(5) and 0.13(2), have large errors but are not inconsistent with the expected $`O(a)`$ scaling. The situation is less clear for the coefficients $`c_X`$. The ALPHA collaboration has computed only $`c_A`$ and $`c_V`$ and our results are completely consistent with theirs at $`\beta =6.2`$. The differences at $`\beta =6.0`$ are much larger in comparison and marginally significant. ## 5 Comparison with perturbation theory The perturbative results quoted in the table are based on one-loop tadpole improved perturbation theory and their errors are obtained by squaring the one-loop term in this tadpole improved series. We notice that the non-perturbatively determined $`Z_V^0`$ and $`Z_P^0/Z_S^0`$ are significantly lower than their perturbative values and all the $`\stackrel{~}{b}_X`$’s are higher. The coefficients $`c_X`$ are small, and except for $`c_A`$, agree with perturbation theory within errors. 
It is worth mentioning that the difference between the perturbative and non-perturbative $`Z_V^0`$ is about 0.046(1) at $`\beta =6.0`$ and 0.037(0) at $`\beta =6.2`$, where the errors are purely statistical. As an $`O(a^2)`$ contamination in our non-perturbative determination is expected to change by a factor of two between these two calculations, it cannot be a dominant contribution to the observed difference. Furthermore, noting that the tadpole improved $`\alpha _s`$ is about 0.13 and 0.12 in the two calculations, these differences are only about 2.5 times $`\alpha _s^2`$, not unreasonable in a slowly converging perturbative series. The pattern is similar for $`Z_P^0/Z_S^0`$, where the difference between the perturbative and non-perturbative results is about 0.09(2) at $`\beta =6.0`$ and 0.07(2) at $`\beta =6.2`$. Because of the larger errors, it is, however, not possible to rule out $`O(a^2)`$ artifacts in this case. Except for $`\stackrel{~}{b}_V`$, the non-perturbative values for $`\stackrel{~}{b}_X`$ are constant within errors as we move from $`\beta =6.0`$ to $`\beta =6.2`$. The leading non-perturbative errors in these quantities are $`O(a)`$, which should change by a factor of about 1.4 between these two couplings. On the other hand, the perturbative one-loop term accounts for only about a fourth of $`\stackrel{~}{b}_V−1`$ and $`\stackrel{~}{b}_A−1`$. Such a high estimate of $`\stackrel{~}{b}_X`$ with a correspondingly low value of $`Z_X^0`$ could indicate a problem with understanding the quark mass dependences in general: more study is needed to clarify this issue. ## 6 Conclusion We have demonstrated the feasibility of our method for determining the scheme-independent renormalization constants of the quark bilinear operators. Differences from previous calculations are consistent with the expected $`O(a^2)`$ differences. Perturbation theory, even when tadpole improved, seems to have $`O(\mathrm{few}\times \alpha _s^2)`$ residual errors for the chirally extrapolated renormalization constants. The improvement constants $`c_X`$ are small, but possibly somewhat larger than predicted by perturbation theory; a detailed comparison is still not possible due to the large errors. The mass dependences of the renormalization constants, given by $`\stackrel{~}{b}_X`$, are much larger than perturbation theory predicts.
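For readers who wish to see the flatness criterion of Eq. (1) in computational form, a minimal sketch follows. It is our own illustration, not the analysis code used in this work: it assumes the zero-momentum correlators are already available as arrays (in lattice units, $`a=1`$) and picks the value of $`c_A`$ that minimises the variance of the ratio over a chosen fit window.

```python
import numpy as np

def fit_cA(corr_dA, corr_PP, t_min, t_max):
    """Sketch of a c_A determination from the flatness of Eq. (1).

    corr_dA[t] ~ sum_x <d_mu A_mu(x,t) J(0)>  and
    corr_PP[t] ~ sum_x <P(x,t) J(0)>          are assumed inputs.
    We minimise the variance in t of
        r(t) = (corr_dA[t] + c_A * d2P[t]) / corr_PP[t],
    where d2P is the discrete second time derivative of corr_PP.
    Requires t_min >= 1 and t_max + 1 <= len(corr_PP).
    """
    t = np.arange(t_min, t_max)
    d2P = corr_PP[t + 1] - 2.0 * corr_PP[t] + corr_PP[t - 1]
    a = corr_dA[t] / corr_PP[t]   # ratio without the improvement term
    b = d2P / corr_PP[t]          # coefficient of c_A in the ratio
    am, bm = a - a.mean(), b - b.mean()
    return -np.dot(am, bm) / np.dot(bm, bm)   # least-squares flatness condition
```

The variance of $`a(t)+c_Ab(t)`$ is quadratic in $`c_A`$, so the flattest ratio has the closed-form minimiser used in the last line.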
# Abundance Patterns in Planetary Nebulae ## 1. Introduction Massive stars ($`>`$ 8 M⊙) are the principal, and in most cases, the sole, source of elements beyond He. However, for the elements C and N, origins are more ambiguous. Intermediate-mass stars (IMS; 1 ≲ M ≲ 8 M⊙) are hot enough in their cores and fusion shells to produce C via He-burning and N via the CNO cycle. Recent theoretical results indicate significant C and N production in IMS (van den Hoek & Groenewegen 1997 (VG); Marigo, Bressan, & Chiosi 1996 (MBC), 1998). Likewise, massive stars, too, synthesize and expel significant C and N (Woosley & Weaver 1995; Nomoto et al. 1997; Maeder 1992). Over the whole stellar mass range, the general conclusion is that N comes predominantly from IMS, while both IMS and massive stars contribute to C. We compare the set of abundances we determined for a sample of 20 PNe over a broad range in progenitor mass and metallicity with PN abundances predicted from stellar yield calculations of VG and MBC. ## 2. Abundance Calculations The heart of our method for determining abundances is the standard one in which abundances of observable ions for an element are first determined using a 5-level atom calculation for each ion. Then these ionic abundances are summed together and multiplied by an ionization correction factor (ICF) which adjusts the sum upward to account for unobservable ions. Finally, this product is in turn multiplied by a model-determined factor $`\xi `$ which acts as a final correction to our elemental abundance. Our modelling method has been discussed in detail most recently in Henry, Kwitter, & Dufour (1999). Our abundance results along with nebular diagnostics are contained in Henry, Kwitter, & Howard (1996), Kwitter & Henry (1996, 1998) and Henry & Kwitter (1999). Results for the entire sample are reported in full in Table 6 of Henry & Kwitter (1999). ## 3. IMS Nucleosynthesis: Models Versus Observations Compilations of observed abundances in PNe, such as those by Henry (1990) and Perinotto (1991), provide strong evidence that IMS synthesize He, C, and N. We can infer that directly by comparing abundance patterns in our PN sample with patterns in the interstellar medium, i.e. H II regions and stars. The two figures show log(C/O) and log(N/O) vs. 12+log(O/H), respectively, for our PN sample (filled diamonds) along with Galactic and extragalactic H II region data (open circles) compiled and described in Henry & Worthey (1999) and F and G star data (open triangles; left-hand figure only) from Gustafsson et al. (1999). Also shown are the positions for the sun (S; Grevesse et al. 1996), Orion (O; Esteban et al. 1998), and M8 (M; Peimbert et al. 1993; left-hand figure only). Note in the left-hand figure that in contrast to the relatively close correlation between C and O displayed by the H II regions and stars, there is no such relation indicated for PNe. In fact the range in C is over 2.5 orders of magnitude, far greater than for the H II regions and stars and larger than can be explained by uncertainties in the abundance determinations. In addition, C levels in PNe appear on average higher than those typical of H II regions for the same O value, indicating that additional C, above the general interstellar level present at the time these stars formed, was produced during their lifetimes. The right-hand figure shows similar behavior for N: H II regions seem to suggest a relation between N/O and O/H in the interstellar medium, yet we see no such pattern for PNe. 
Also, N/O tends to be systematically higher for PNe than for H II regions, again suggesting that N is produced by PN progenitors. These figures imply that C and N are synthesized in IMS; evidence from Ne/O strengthens this contention. Limited space prohibits inclusion of a similar figure that shows a constant value for Ne/O over a range in O abundance; the pattern displayed by PNe is indistinguishable from that of the H II regions. For a full discussion see Henry & Worthey (1999). ## 4. Comparison with Predicted Yields We used our PN abundance results to test the theoretical predictions of PN abundances; for details see Henry & Kwitter (1999). We tested two published sets of theoretical calculations. VG calculated a grid of stellar models ranging in mass fraction metallicity between 0.001 and 0.04 and progenitor mass of 0.8 to 8 M⊙. Likewise, MBC calculated models for mass fraction metallicities of 0.008 and 0.02 for stars between 0.7 and 5 M⊙. Both teams employed up-to-date information about opacities and mass loss to calculate yields for several isotopes, including ⁴He, ¹²C, ¹³C, and ¹⁴N. The progenitor metallicity range consistent with our results is between 1/20 solar and solar. We found that observed abundances of C vs. O are consistent with predictions for both high- and low-mass progenitors. At all metallicities the C abundance is initially predicted to rise with mass but then drop back to low values as mass continues to increase above 2–3 M⊙. This reversal is the result of hot-bottom burning in stars with greater masses than this, in which C from the 3rd dredge-up is converted to N at the base of the convective envelope late in the AGB stage. For N vs. O, the predicted behavior with progenitor mass is positively monotonic and is consistent with our abundances. Apparently the C and N abundances observed in PNe are consistent with progenitor masses in the range of 1–4 M⊙. Consideration of N vs. He reinforces our conclusions about the PN progenitor mass range. Theoretical abundance predictions for progenitors in the 1–4 M⊙ range are consistent with our observations. The one extreme outlier is PB6, whose unusually high He abundance (He/H=0.20) needs to be confirmed independently. Our detailed comparison of observed PN abundances with predicted ones has demonstrated good agreement between the two and is indeed encouraging. To the extent to which predicted PN abundances are related in turn to the actual stellar yields, our comparison provides what we believe to be the best empirical support yet for the theoretical calculations. It is imperative, however, that these models continue to be tested with larger samples of PNe whose abundances have been carefully determined. As this is done, we will be better able to ascertain the exact role that intermediate mass stars play in the synthesis of C, N, and He in galaxies. ## 5. Summary * Abundances of C and N in PNe, when plotted against O, show a much broader range than in H II regions and F and G stars and are generally higher. At the same time, both O and Ne display similar patterns in both PNe and H II regions. Taken together, these results support the idea that PN progenitors synthesize significant amounts of C and N. * Abundances of C, N and He found in our sample of PNe are consistent with model predictions. 
We believe that this is the first time that such a detailed comparison of observation and theory has been possible and that the results provide encouragement for the use of published yields of intermediate mass stars in studying galactic chemical evolution, especially in the cases of C and N. * Our comparisons of observed and predicted PN abundances support the occurrence of hot-bottom burning in stars above about 3.5–4 M⊙. ### Acknowledgments. We are grateful to the support staff at KPNO for help in carrying out the observing portions of this program. This project was supported by NASA grant NAG 5-2389. ## References Esteban, C., Peimbert, M., Torres-Peimbert, S., & Escalante, V. 1998, MNRAS, 295, 401 Grevesse, N., Noels, A., & Sauval, A.J. 1996, in ASP Conf. Ser. 99, Cosmic Abundances, ed. S.S. Holt & G. Sonneborn (San Francisco: ASP), 117 Gustafsson, B., Karlsson, T., Olsson, E., Edvardsson, B., & Ryde, N. 1999, A&A, 342, 426 Henry, R.B.C. 1990, ApJ, 356, 229 Henry, R.B.C., Kwitter, K.B., & Dufour, R.J. 1999, ApJ, 517, 782 Henry, R.B.C., Kwitter, K.B., & Howard, J.W. 1996, ApJ, 458, 215 Henry, R.B.C., & Worthey, G. 1999, PASP, 111, 919 van den Hoek, L.B., & Groenewegen, M.A.T. 1997, A&AS, 123, 305 Henry, R.B.C., & Kwitter, K.B. 1999, ApJ, submitted Kwitter, K.B., & Henry, R.B.C. 1996, ApJ, 473, 304 Kwitter, K.B., & Henry, R.B.C. 1998, ApJ, 493, 247 Maeder, A. 1992, A&A, 264, 105 Marigo, P., Bressan, A., & Chiosi, C. 1996, A&A, 313, 545 Marigo, P., Bressan, A., & Chiosi, C. 1998, A&A, 331, 564 Nomoto, K., Hashimoto, M., Tsujimoto, T., Thielemann, F.-K., Kishimoto, N., Kubo, Y., & Nakasato, N. 1997, Nuc. Phys. A, A616, 79c Peimbert, M., Torres-Peimbert, S., & Dufour, R.J. 1993, ApJ, 418, 760 Perinotto, M. 1991, ApJS, 76, 687 Woosley, S.E., & Weaver, T.A. 1995, ApJS, 101, 181
# Radio Sources in the 2dF Galaxy Redshift Survey. I. Radio source populations¹ ¹Based on data obtained by the 2dFGRS Team: J Bland–Hawthorn (AAO), R D Cannon (AAO), S Cole (Durham), M Colless (ANU, Australian convenor), C A Collins (Liverpool J Moores), W J Couch (UNSW), N Cross (St Andrews), G B Dalton (Oxford), K E Deeley (UNSW/AAO), R De Propris (UNSW), S P Driver (St Andrews), G Efstathiou (Cambridge), R S Ellis (Cambridge, UK convenor), S Folkes (Cambridge), C S Frenk (Durham), K Glazebrook (AAO), N J Kaiser (Hawaii), O Lahav (Cambridge), I J Lewis (AAO), S L Lumsden (Leeds), S J Maddox (Cambridge), S Moody (Cambridge), P Norberg (Durham), J A Peacock (Edinburgh), B A Peterson (ANU), I A Price (ANU), S Ronen (Cambridge), M Seabourne (Oxford), R Smith (Edinburgh), W J Sutherland (Oxford), H Tadros (Oxford), K Taylor (AAO). ## 1 Introduction A new generation of sensitive, large–area radio–source surveys at millijansky levels (1 Jy = 10⁻²⁶ W m⁻² Hz⁻¹) is now becoming available. They include the FIRST survey (Becker et al. 1995), WENSS (Rengelink et al. 1997), NVSS (Condon et al. 1998) and SUMSS (Bock et al. 1999). These surveys offer some important advantages for cosmological studies – they reach sufficiently high source densities that detection of large–scale structure is possible (Cress et al. 1996, Magliocchetti et al. 1998), and also probe a second cosmologically–significant radio source population, that of star–forming galaxies, which are rarely seen in strong–source surveys. Deep radio surveys of a few small areas of sky at 1.4 GHz (Condon 1984, Windhorst et al. 1985; see also Condon 1992) have shown that there are two distinct populations of extragalactic radio sources. Over 95% of radio sources above about 50 mJy are classical radio galaxies and quasars (median redshift $`z∼1`$) powered by active galactic nuclei (AGN), while the remainder are identified with star–forming galaxies (median $`z∼0.1`$). The fraction of star–forming galaxies increases rapidly below 10 mJy, and below 1 mJy they begin to be the dominant population. The scientific return from radio continuum surveys is enormously increased if the optical counterparts of the radio sources can be identified and their redshift distribution measured. In the past, this has been a slow and tedious process which could only be carried out for relatively small samples. However, the Anglo–Australian Observatory’s Two–degree Field (2dF) spectrograph now makes it possible to carry out spectroscopy of several hundred galaxies simultaneously. Here, we describe the first step in this process – the identification of faint radio–source counterparts among galaxies whose spectra have been obtained in the 2dF Galaxy Redshift Survey (2dFGRS). The 2dFGRS (Colless 1999, Maddox 1998) is a large–scale survey of 250,000 galaxies covering 2000 square degrees in the southern hemisphere. The survey is designed to be almost complete down to a limiting apparent magnitude of $`b_\mathrm{J}=19.4`$. The median redshift of the galaxies is about 0.1 and the great majority have $`z<0.3`$. Spectra are being obtained using the 2dF multi–object fibre optic spectroscopic system on the 3.9m Anglo–Australian Telescope (AAT) (Lewis et al. 1998; Smith & Lankshear 1998). 
The survey will cover two large strips of sky at high Galactic latitude, one each in the southern and northern Galactic hemispheres, plus outlying random fields in the south. The first test data were obtained in 1997 and the survey is expected to be substantially complete by the end of 2000. This paper presents the first results from what will eventually be a much larger study. When the 2dFGRS is complete, it will yield around 4000 good–quality spectra of galaxies associated with faint radio sources — by far the largest sample of radio–galaxy spectra ever compiled. Our aim in this paper is to present a first, qualitative exploration of the faint radio source population as observed by 2dF and the NVSS. Throughout this paper, we use H₀=75 km/s/Mpc and q₀=0.5. ## 2 The optical data Our data set comprises the thirty fields observed by the 2dFGRS team in November 1997 and January 1998. Twenty–three of these are in the southern Galactic hemisphere and the other seven in the north. They include a total of 8362 target galaxies brighter than $`b_\mathrm{J}=19.4`$. Although each 2dF field covers about 3.14 square degrees, the total effective area covered by the 30 fields is somewhat less than 90 square degrees because there is some overlap between adjacent fields. Also, the fractional completeness of this early sample varies substantially from field to field. In many fields about 30% of the fibres were allocated to targets in a parallel QSO survey; the 2dFGRS uses a flexible tiling algorithm to deal with this and with the intrinsic variations in target density across the sky. When complete, the GRS will yield spectra for around 95% of all galaxies in the input catalogue. The current sample has variable levels of completeness (in terms of the surface distribution of all galaxies brighter than $`b_\mathrm{J}=19.4`$ mag.), though there should be no systematic effects depending on the magnitude, redshift or other properties of the galaxies. The standard observing pattern for the 2dFGRS is a set of three consecutive 20–minute exposures per field, together with calibration arcs and flat fields. The total exposure time is well–matched to the time required to reconfigure a second set of fibres for the next observation. Up to 380 galaxies can be observed simultaneously, with some 20 fibres allocated to sky. In many fields, however, the total number of galaxies is closer to 250 since a deep QSO survey is being carried out in parallel with the galaxy redshift survey. Using 300 lines/mm gratings, the 2dFGRS spectra cover the wavelength range 3800Å to 8000Å at a resolution of about 10Å. Most spectra have a signal–to–noise ratio (S/N) of 10 (per 4Å pixel) or better. Reliable redshifts are obtained for up to 95% of the targets in good observing conditions; the survey average is currently about 90% (Folkes et al. 1999). About 5% of the targets are found to be foreground stars; the original selection was for objects which appeared non–stellar in digitised data from UK Schmidt Telescope sky survey photographs, using a conservative criterion to minimise the number of galaxies missed. Figure 1 shows three spectra from the survey, and gives an idea of the typical quality of 2dFGRS spectra for galaxies with redshifts around $`z=0.15`$ and $`b_\mathrm{J}`$ magnitudes in the range 17.6–18.9. ## 3 The radio data The NRAO VLA Sky Survey (NVSS; Condon et al. 1998) is a 1.4 GHz (20 cm) radio imaging survey of the entire sky north of declination −40°. 
The survey catalogue contains sources as weak as 2.5 mJy, and is essentially complete above 3.5 mJy. We used the NVSS source catalogue to identify candidate radio–emitting galaxies in the 2dFGRS. Subsets of the NVSS catalogue were extracted to match the RA and Dec range covered by each of the 2dF fields. The NVSS source density is roughly 60 per square degree, so each of these sub-catalogues contained about 200 radio sources. At this stage, we did not attempt to identify ‘double’ NVSS sources. However, we estimate (using an algorithm similar to that adopted by Magliocchetti et al. (1998) for identifying double radio sources in the FIRST survey) that these represent only a small fraction of NVSS sources (of order 1%), so the presence of double sources does not significantly affect our results in this small sample. We compared the NVSS and 2dFGRS catalogues for each field and identified the galaxies for which there is a candidate radio ‘match’, i.e. an NVSS radio source lying within 15 arcsec of the position of a 2dF galaxy. The 15 arcsec limit was chosen because our earlier Monte Carlo tests using the COSMOS database suggested that most candidate matches of bright galaxies (i.e. $`b_\mathrm{J}<`$19.4 mag) with radio–optical separation up to 10 arcsec are real associations, together with a substantial fraction of those with offsets of 10–15 arcsec. The uncertainty in the NVSS radio source positions increases from a 1$`\sigma `$ error of 1–2 arcsec at 10 mJy to 4–5 arcsec (and occasionally up to 10 arcsec) for the faintest (2–3 mJy) sources (Condon et al. 1998). In addition, centroiding of bright radio sources with extended radio structure can be somewhat uncertain and may make optical identification difficult. Determining the optical centroid also becomes imprecise for large nearby galaxies. In these cases, overlaying the radio contours on an optical image usually makes it clear whether or not the candidate ID is correct. We found a total of 127 candidate matches in the 30 fields studied, i.e. 1.5% of the 8362 2dF galaxies in the survey area. Of these, 99 had radio–optical offsets of less than 10 arcsec. We also ran the matching program twice more, offsetting all the radio positions by 3 and 5 arcmin. Any matches produced from this ‘off–source’ catalogue should be chance coincidences, allowing us to estimate the number of matches expected purely by chance. Table 1 lists the results of this test, and shows the distribution of the offset D (difference between radio and optical positions) for matches found in the ‘on–source’ and ‘off–source’ tests. A further check comes from predicting the expected number of chance coincidences, based on the average surface densities of objects. Since 15 arcsec is 1/240 of the 1 degree radius of each 2dF field, and there are about 60 NVSS sources per square degree, the chance that a given GRS target will fall within 15 arcsec of an unrelated radio source is about $`60\times \pi /240^2`$ (the resolution of the NVSS is such that at most one source can be identified with each optical galaxy). Thus about 27 chance coincidences are expected in a total of 8362 galaxies (see column 5 of Table 1). This calculation ignores the known clustering of galaxies on the sky, although this will only invalidate the result if there is significant clustering on scales comparable to the identification range of 15 arcsec, or if there are substantial differences between the spatial distribution of radio–loud and quiet galaxies. 
We believe these effects are unlikely to give an error of more than 10%. The results suggest that candidate matches with an offset of up to 10 arcsec are highly likely to be real associations. We therefore use a simple 10 arcsec cutoff in radio–optical position difference for the analysis which follows. This gives us 99 radio–detected galaxy matches in the 30 2dF fields. It also means that we have probably omitted about a dozen real identifications with larger offsets, but this is not a problem here since our aim is to make a first qualitative exploration of the faint radio galaxy population. ## 4 Two kinds of radio source: AGN/SF classification We classified each matched galaxy as either ‘AGN’ or ‘star–forming’ (SF) based on its 2dF spectrum. ‘AGN’ galaxies have either a pure absorption–line spectrum like that of a giant elliptical galaxy, or a stellar continuum plus nebular emission lines such as \[OII\] and \[OIII\] which are strong compared with any Balmer–line emission. Some of the emission–line AGN have spectra which resemble Seyfert galaxies. ‘SF’ galaxies are those where strong, narrow emission lines of H$`\alpha `$ and (usually) H$`\beta `$ dominate the spectrum. They include both nearby spirals and more distant IRAS galaxies. Figure 1 shows examples of spectra we classified as AGN, Seyfert and SF. Note that in this classification scheme, ‘AGN’ may simply denote the presence of radio emission, with no obvious optical signature. The origin of radio emission in the AGN and SF galaxies is believed to be quite different (e.g. Condon 1989), arising from non–thermal processes related to a central massive object in the AGN galaxies and from processes related to star formation (including supernova remnants, HII regions, etc.) in the SF galaxies. We are confident that this simple ‘eyeball’ classification of the 2dF spectra allows us to separate the AGN and SF classes accurately. Jackson & Londish (1999) measured several emission–line ratios (including \[OIII, $`\lambda `$5007\]/H$`\beta `$, \[NII, $`\lambda `$6584\]/H$`\alpha `$, \[OI, $`\lambda `$6300\]/H$`\alpha `$ and \[SII, $`\lambda \lambda `$6716,6731\]/H$`\alpha `$) for most of the galaxies studied here and plotted them on the diagnostic diagrams of Veilleux & Osterbrock (1987). They found that the ‘eyeball’ classifications and line–ratio based classifications agreed more than 95% of the time, and hence that ‘eyeball’ classifications can be used with confidence to analyse large samples of 2dF spectra. Most of the 2dF spectra are of impressively good quality. However, of the 98 spectra we examined (one galaxy was not actually observed by the 2dFGRS), nine had such a low signal–to–noise ratio that we were unable to classify the spectrum. One other object appeared to be a Galactic star. We were therefore left with 88 good–quality 2dF spectra of candidate radio matches with an offset D$`<`$10 arcsec. Of these 88 galaxies, 36 (41%) were classified as SF and 52 (59%) as AGN. One galaxy classified as SF had an emission–line spectrum which resembled an AGN, but was also detected as an IRAS source at 60 $`\mu `$m. This may be a genuinely composite object. Table 2 lists the matched galaxies, their spectral classification, 1.4 GHz radio continuum flux density, apparent magnitude and redshift. A more quantitative spectral classification using diagnostic emission–line ratios will be presented in the forthcoming paper by Jackson and Londish (1999). 
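As an illustration of how such a line-ratio separation can be automated, the sketch below (Python) classifies a spectrum from its measured \[NII\]/H$`\alpha `$ and \[OIII\]/H$`\beta `$ ratios. The dividing curve used here is a standard theoretical starburst boundary of a type published after this work; it is quoted purely for illustration and is not the criterion used by Jackson & Londish (1999) or in our ‘eyeball’ classification.

```python
import numpy as np

def classify_spectrum(nii_ha, oiii_hb):
    """Illustrative AGN/SF separation in the [NII]/Halpha vs [OIII]/Hbeta
    plane (after the diagnostic diagrams of Veilleux & Osterbrock 1987).
    The dividing curve is the commonly used Kewley et al. (2001)
    'maximum starburst' line, chosen here only as an example threshold."""
    x, y = np.log10(nii_ha), np.log10(oiii_hb)
    if x >= 0.47:                  # boundary curve diverges at x = 0.47
        return "AGN"
    return "AGN" if y > 0.61 / (x - 0.47) + 1.19 else "SF"
```

In practice one would apply such a cut only to spectra with all four lines detected at adequate signal-to-noise, falling back to continuum-based (‘eyeball’) classification otherwise.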
Figures 2 and 3 show the distribution of AGN and SF classes in apparent magnitude and redshift respectively. There is a clear segregation in apparent magnitude: most galaxies brighter than about $`b_\mathrm{J}`$ = 16.5–17 magnitude fall into the star–forming (SF) class, while the AGN class dominates the population fainter than $`b_\mathrm{J}∼17`$. This reflects strong differences in the global properties of the two classes as well as the radio and optical flux limits of the NVSS and 2dFGRS. The AGN galaxies are typically more distant than the SF galaxies (by about a factor of 3: Figure 3), and more luminous both optically and in radio power (see Figures 4 and 5). We know that the SF galaxies continue to large redshifts and to very faint optical magnitudes (e.g. Benn et al. 1993), but these galaxies quickly drop out of our sample because of the 2–3 mJy limit of the NVSS in radio flux density. Similarly, we know that the AGN galaxies extend to much higher redshifts than probed by the 2dFGRS, but these distant AGN galaxies will be fainter than the $`b_\mathrm{J}=19.4`$ mag optical limit of the 2dFGRS. Figure 6 shows plots of radio power and optical luminosity versus redshift for the AGN and SF classes — the solid lines correspond to the survey completeness limits of 3.5 mJy and 19.4 mag for the NVSS and 2dFGRS respectively. Galaxies below these lines will be excluded from our sample. Note that most of the SF galaxies are weak radio sources, lying close to the NVSS cutoff at all redshifts, while most of the AGN galaxies lie well above the radio limit but start to drop below the optical cutoff at redshifts above 0.15. ## 5 Matches with IRAS sources We expect many of the SF radio sources to be IRAS detections, based on the well–known correlation between radio continuum and far-infrared (FIR) luminosities (e.g. Wunderlich et al. 1987, Condon et al. 1991). For spiral galaxies, $`S_{60\mu \mathrm{m}}≃100S_{1.4\mathrm{GHz}}`$ (e.g. Condon & Broderick 1988, Rowan–Robinson et al. 1993), so NVSS should detect most galaxies in the IRAS Faint Source Catalog (which has a flux density limit of 0.28 Jy at $`60\mu \mathrm{m}`$). Of the 36 galaxies classified as SF in Table 2, two (TGN222Z132 and XGN221Z023) lie in the 3% of the sky which has no IRAS coverage (Beichman et al. 1985). Of the remaining 34 galaxies, 27 (i.e. 79%) are detected at 60$`\mu `$m in the IRAS Point Source Catalog or Faint Source Catalog (FSC). Figure 7 compares the radio continuum (1.4 GHz) and IRAS ($`60\mu \mathrm{m}`$) flux densities for these galaxies (for galaxies undetected by IRAS we show an upper limit of 0.28 Jy, corresponding to the completeness level of the FSC). If we exclude one galaxy with anomalously strong 60$`\mu `$m emission as discussed below, the mean FIR–radio ratio $`Q_{60}=S_{60\mu \mathrm{m}}/S_{1.4\mathrm{GHz}}`$ for the IRAS–detected galaxies is 112$`\pm `$8, i.e. close to that derived from other studies. One galaxy (TMS206Z015) has an unusually high value of $`Q_{60}=380`$, with much stronger FIR emission than would be expected from the radio continuum flux density. The most likely explanation is confusion in the IRAS beam, since this galaxy lies in a group and appears to be interacting with a companion. ## 6 Conclusions Based on a study of 30 fields from the 2dF Galaxy Redshift Survey, we find that about 1.5% of 2dFGRS galaxies brighter than $`b_\mathrm{J}<19.4`$ magnitude are candidate identifications for 1.4 GHz NVSS radio sources. 
Of these about 80–85% will turn out, after closer examination, to be ‘real’ associations. Thus if the whole 2dFGRS contains 250,000 galaxies, we expect to identify about 4,000 candidate matches with NVSS by the time the 2dFGRS is complete. About 60% of these galaxies will be AGN (radio galaxies and some Seyferts) and 40% star–forming galaxies. For galaxies south of declination −30°, we will also have 843 MHz radio flux density measurements from the Sydney University Molonglo Sky Survey (SUMSS; Bock et al. 1999), and will be able to measure radio spectral indices. The final sample will be by far the largest (and most homogeneous) sample of radio–galaxy spectra ever obtained, and will allow us to study both the AGN and starburst radio populations out to redshift $`z∼0.3`$, and to look for evidence of evolution over this redshift range. As the present paper goes to press, the 2dF Galaxy Redshift Survey is 20% complete and our sample has already grown to more than 700 galaxies. This larger sample will be analysed in more detail in a forthcoming paper. ## Acknowledgements We thank the 2dF Galaxy Redshift Survey team for allowing us early access to their data. We also acknowledge the essential contribution of the many people, at the AAO and elsewhere, who have contributed to the building and operation of the 2dF facility. Finally, we thank the two anonymous referees of this paper for their careful reading and helpful suggestions. The IRAS flux densities quoted in Table 2 were obtained from the NASA Extragalactic Database (NED). ## References Becker, R.H., White, R.L., Helfand, D.J. 1995 ApJ 450, 599 Beichman, C.A., Neugebauer, G., Habing, H.J., Clegg, P.E., Chester, T.J. 1985 IRAS Explanatory Supplement, JPL Benn, C.R., Rowan–Robinson, M., McMahon, R.G., Broadhurst, T.J., Lawrence, A. 1993 MNRAS 263, 98 Bock, D.C-J., Large, M.I., Sadler, E.M. 1999 AJ 117, 1578 Colless, M. 1999 Phil Trans R Soc Lond. A, 357, 105 (astro-ph/9804078) Condon, J.J. 1984 ApJ 287, 461 Condon, J.J., Broderick, J.J. 1988 AJ 96, 30 Condon, J.J. 1989 ApJ 338, 13 Condon, J.J., Anderson, M.L., Helou, G. 1991 ApJ 376, 95 Condon, J.J. 1992 ARAA 30, 575 Condon, J.J., Cotton, W.D., Greisen, E.W., Yin, Q.F., Perley, R.A., Taylor, G.B., Broderick, J.J. 1998 AJ 115, 1693 Cress, C.M., Helfand, D.J., Becker, R.H., Gregg, M.D., White, R.L. 1996 ApJ 473, 7 Folkes, S. and 24 others 1999 MNRAS, in press (astro-ph/9903456) Jackson, C.A., Londish, D.M. 1999 PASA, submitted Lewis, I.J., Glazebrook, K., Taylor, K. 1998 SPIE Proc., 3355, 828 Maddox, S.J. 1998 in “Large Scale Structure: Tracks and Traces”, proc. 12th Potsdam Cosmology Workshop, eds. Müller, V., Gottlöber, S., Mücket, J.P., Wambsganss, J., World Scientific, 91 (astro-ph/9711015) Magliocchetti, M., Maddox, S., Lahav, O., Wall, J.V. 1998 MNRAS 300, 257 Rengelink, R.B., Tang, Y., de Bruyn, A.G., Miley, G.K., Bremer, M.N., Röttgering, H., Bremer, M.A.R. 1997 A&AS, 124, 259 Rowan–Robinson, M., Benn, C.R., Lawrence, A., McMahon, R.G., Broadhurst, T.J. 1993 MNRAS, 263, 123 Smith, G., Lankshear, A. 1998 SPIE Proc., 3355, 905 Veilleux, S., Osterbrock, D.A., 1987 ApJS, 63, 295 Windhorst, R.A., Miley, G.K., Owen, F.N., Kron, R.G., Koo, D.C. 1985 ApJ, 289, 494 Wunderlich, E., Klein, U., Wielebinski, R. 1987 A&AS, 69, 487
# Angular Momentum Transfer in the Binary X-ray Pulsar GX 1+4 ## 1 Introduction The binary X-ray pulsar GX 1+4 is unique in several respects. It is the only known pulsar in a symbiotic system (V2116 Oph), $`\dot{P}/P∼2\%`$ per year is the largest measured for any pulsar, and the neutron star magnetic field is believed to be $`∼3\times 10^{13}`$ G - the strongest field in the known high mass or low mass X-ray binaries. The optical spectrum is that of an M giant plus a variable blue continuum and a forest of strong emission lines from H, HeI, FeII, \[FeVII\], \[OIII\] etc. The emission lines are believed to arise from photo-electric interactions of accretion disc UV photons in circumstellar matter (Davidsen, Malina & Bowyer, 1977; Chakrabarty & Roche, 1997). The blue continuum is generated by the disc. The processes of angular momentum transfer in GX 1+4 are poorly understood. A long period of high luminosity and neutron star spin-up was followed by generally lower luminosity and spin-down interrupted by short episodes of spin-up. The average rates of spin-up and spin-down are similar but no clear correlation exists between $`\dot{P}`$ and X-ray luminosity $`L_X`$. The changes are not consistent with the standard accretion theory model (Ghosh & Lamb, 1979). In this workshop we investigated recent evidence relating to transitions between spin-up and spin-down in GX 1+4. In chapter 2 we discuss evidence from X-ray and optical spectroscopy concerning the nature and distribution of the circumstellar matter in this system. In chapter 3 we report sudden changes in X-ray pulse profiles associated with changes in $`L_X`$ and probably associated with a brief transition to neutron star spin-up. In chapter 4 we compare the observed relation between $`L_X`$ and accretion torque with numerical simulations based on theoretical models of the accretion disc. ## 2 Spectroscopic evidence on the nature of the neutron star environment GX 1+4 was observed with the ASCA X-ray satellite on 1994 September 14-15 and spectroscopy of the optical counterpart, V2116 Oph, was carried out at the Anglo-Australian Telescope using the RGO spectrograph on 1994 September 25-26. Photometry during 1994 August to October, using the Mt Canopus, Tasmania and Mt John, New Zealand 1 m telescopes showed little change in the source between the times of the X-ray and optical observations. The X-ray spectroscopy showed considerable photo-electric absorption in the source region and strong iron line emission. The ionisation state of the iron (FeI-FeIV) shows that the $`\xi `$-parameter ($`L_X/nr^2`$, where $`n`$ is the particle density of the circumstellar matter and $`r`$ is the path length of the X-rays through this matter) is $`≲30`$ erg cm s⁻¹. Using the measured values of $`L_X`$ and $`N_H(=nr)`$ we estimate the characteristic scale of the attenuating matter distribution $`r∼3\times 10^{12}`$ cm and $`n∼7\times 10^{10}`$ cm⁻³. The results of the optical spectroscopy were consistent with these conclusions. Using the Balmer line ratios and the calculations of Drake & Ulrich (1980) we estimate the electron density $`n_e∼3\times 10^{10}–10^{11}`$ cm⁻³ and plasma temperature $`∼20,000`$ K in the emission line region. The absence of FeIII and the presence of FeII lines supports this temperature estimate. Using this information, Kotani et al (1999) propose a model in which the circumstellar matter is gravitationally bound to the neutron star during times of low $`L_X`$. 
This is consistent with the observed $`H_\alpha `$ line width ($`∼2`$ AU), assuming Doppler broadening from bound hydrogen at $`r∼3\times 10^{12}`$ cm. The model provides an unstable negative feedback mechanism leading to large short term fluctuations in $`L_X`$ when the system is in a low intensity state. Increased accretion raises $`L_X`$, heating the trapped matter until the thermal velocity exceeds the escape velocity, driving off trapped matter and suppressing accretion from the stellar wind. This occurs only at large distances from the neutron star, so there is a delay with timescale $`∼`$ the orbital period (several months) before accretion begins to decrease. Hence the accretion rate $`\dot{M}`$ and $`L_X`$ will be unstable and variable on timescales of months but relatively stable on longer timescales while the mean $`L_X`$ is low. If the ram pressure of the M giant wind (or matter transferred by Roche lobe overflow) becomes much higher than the thermal pressure in the trapped matter, it will not be blown off by X-ray heating and the feedback mechanism will not be active. $`L_X`$ will be larger and dependent only on the rate of mass flow from the M giant. This mechanism requires very special conditions for its operation and is unlikely to apply to systems with supersonic winds, as e.g. in Cen X-3 or Vela X-1. The Kotani et al (1999) model provides a natural explanation for some aspects of the long term behaviour of GX 1+4. Greenhill & Watson (unpublished report, 1994) collated the results of over 60 published measurements of GX 1+4 between 1971 and 1994. Fig. 1 is their estimate of the time dependence of the 20 keV X-ray flux during this period. Throughout the 1970s $`L_X`$ was large and relatively stable, as expected when the feedback mechanism is not active. Subsequently the source was highly variable on timescales of order months and the mean value of $`L_X`$ was much lower. We suggest that the feedback mechanism was active during this period. Another prediction is that large X-ray flares will be of shorter duration than smaller flares. Large flares will blow off matter closer to the neutron star and hence more rapidly affect accretion onto it. The pulsed flux history reported by Chakrabarty et al (1997) is qualitatively consistent with this prediction. The model does not provide an explanation for the transition between intensity states. This may be caused by some long term instability in the giant companion. Nor does it make any prediction concerning the direction of angular momentum transfer in this system. We note however that the negative feedback regime, with a large diameter shell of low velocity trapped matter, may be more conducive to the formation of a contra-rotating disc than the high luminosity regime, when the ram pressure of the wind from the giant is higher and the wind extends much closer to the surface of the neutron star. ## 3 Spectral characteristics of GX 1+4 throughout a low flux episode GX 1+4 was observed using the Rossi X-ray Timing Explorer (RXTE) satellite (Giles et al. 1995) over 1996 July 19-21 during a period of unusually low X-ray brightness for the source. For a detailed report see Galloway et al (1999) and Giles et al (1999). 
The countrate from the Proportional Counter Array (PCA) aboard RXTE indicates that the mean flux decreased smoothly from an initial level of $`∼6\times 10^{36}`$ erg s⁻¹ to a minimum of $`∼4\times 10^{35}`$ erg s⁻¹ (20-60 keV, assuming a source distance of 10 kpc) before partially recovering towards the initial level at the end of the observation. The pulse profiles (folded at the best-fit constant period) and the mean photon spectra before and after the flux minimum show significant variation. The observation is divided up into three distinct intervals based on the mean flux. Interval 1 includes the start of the observation to just before the flux minimum. Interval 2 spans $`∼6`$ hours including the flux minimum, while during interval 3 the mean flux is rising steadily towards the end of the observation. The pulse profile is asymmetric and characterised by a narrow, deep primary minimum (Fig. 2). During interval 1, the flux reaches a maximum closely following the primary minimum; this is referred to as a ‘leading-edge bright’ profile. Pulsations all but cease during interval 2, and in interval 3 the asymmetry is reversed, with the flux reaching a maximum just before the primary minimum (‘trailing-edge bright’ profile). This is the first observation of such dramatic pulse profile variations over timescales of $`<1`$ day. Leading-edge bright profiles are generally associated with phases of spin-down in GX 1+4, while trailing-edge bright profiles are mostly observed during phases of spin-up (Greenhill, Galloway & Storey, 1998). Re-analysed data from the regular monitoring of the source by the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma-Ray Observatory (CGRO) indicate that the source switched from spin-down to spin-up $`∼12`$ days after the RXTE observation. This suggests that the mechanism for the pulse profile variations may be related to that causing the poorly-understood spin period evolution in this source. The best fitting spectral model (Galloway et al, 1999) is based on the work of Titarchuk (1994). The principal component is generated by Comptonisation of a thermal input spectrum at $`∼1`$ keV by hot ($`kT∼8`$ keV) plasma close to the source with scattering optical depth $`\tau _P∼3`$. Additional components include a gaussian to fit Fe K$`\alpha `$ line emission from the source and a multiplicative component representing photoelectric absorption by cold material in the line-of-sight. Variations in the mean spectrum over the course of the observation are associated with a dramatic increase in the column density $`n_H`$ from $`13\times 10^{22}`$ to $`28\times 10^{22}`$ cm⁻² between intervals 1 and 3, and also with significant energy-independent variations in the flux. Similar spectral variations were seen in 4U 1626-67 before and after the spin rate transition in that source (Yi & Vishniac, 1999). This strengthens the argument that the pulse profile and spectral changes reported here were associated with the torque reversal in GX 1+4 reported by Giles et al. (1999). Pulse-phase spectral fitting indicates that variations in flux with phase can be accounted for by changes in the Comptonised model component, with, in particular, variations in the fitted optical depth $`\tau _P`$ and the component normalisation $`A_\mathrm{C}`$ accounting for the phase dependence. The spectral fits suggest that the soft input photons originate from the neutron star poles, and are subsequently Comptonised by matter in the accretion columns. 
The sharp dip in the pulse profiles is then tentatively identified with the closest passage of one of the magnetic axes to the line of sight. More details of the spectral analysis can be found in Galloway et al. (1999). ## 4 SPH modelling of a counter rotating disc The new continuous monitoring data obtained by BATSE (Bildsten et al. 1997, Nelson et al. 1997, Chakrabarty et al. 1997) have yielded values for the mass accretion rate (X-ray luminosity) and the accretion torque (change in spin frequency) on a regular basis for a large number of X-ray pulsars. This data set has allowed a detailed comparison of the observed relation between accretion rate and torque, and that predicted by theoretical models. The observations are not consistent with the standard Ghosh & Lamb (1979) model, since this predicts a clear correlation between spin-up and an increase in X-ray luminosity, whereas the observations show a variety of behaviour, with spin-up or spin-down occurring at the same apparent luminosity. Numerical simulations (see Ruffert, 1997 and references therein) have shown that it is possible to form temporary accretion discs with alternating senses of rotation in wind-accreting systems. Nelson et al. (1997) made the ad hoc suggestion that many observational features of some systems that are normally thought to contain discs (GX 1+4, 4U 1626-67) would be explained if they were accreting from discs with alternately prograde and retrograde senses of rotation. Previously, Makishima et al. (1988), Dotani et al. (1989) and Greenhill et al. (1993) had also sought to explain the rapid spin-down of GX 1+4 in terms of accretion from a retrograde disc. If the secondary star is feeding the accretion disc via Roche lobe overflow, as is almost certainly the case in 4U 1626-67, it is hard to conceive how a retrograde disc could ever come about. However, in the case of GX 1+4, the suggestion is not unreasonable. This X-ray pulsar is unique in the sense that it is accreting from a red giant or AGB star wind (Chakrabarty & Roche 1997), and is in a very wide orbit. Estimating the timescale of disc reversal for accretion from such a wind, one obtains a timescale of the order of years, and the disc would form at a large radius ($`10^{13}`$ cm) so that the inner part of the accretion flow is expected to be like a normal accretion disc. A timescale of years corresponds well with the timescale on which the accretion behaviour in GX 1+4 is observed to change, with a negative correlation between accretion rate and spin-up in some phases while the disc would be retrograde, and a positive one at other times when it is prograde (Chakrabarty et al. 1997). Thus, this system is ideally suited to study the possibility of forming retrograde discs, since the timescale for disc reversal would be much longer than that of the torque fluctuations on a timescale of one day or less that are common in all types of X-ray pulsars. In the systems that accrete from a fast wind, the two timescales are comparable, and the effects will be difficult to separate. Two dimensional smoothed particle hydrodynamics (SPH) simulations were used to investigate the interactions of an existing accretion disc with material coming in with opposite angular momentum. See Murray, de Kool & Li, 1999 for more details of the calculations. Ideally we should like to simulate the entire accretion disc. However, for GX 1+4, this would require resolution over several decades in radius. 
Rather than attempting this, we completed two separate simulations: the first being of the inner, viscously dominated region, which for GX 1+4 we expect to extend from the neutron star magnetosphere out to a radius $`r\approx 5\times 10^{10}`$ cm; and the second being of the outer disc, in which the dynamical mixing of material with opposite angular momentum dominates. We found that in the inner disc, once the sense of rotation of inflowing material was reversed (figure 4), the existing disc was rapidly driven inside the circularisation radius of the new counter-rotating matter. Further evolution occurred on the viscous time scale, with the initial disc slowly being accreted at the same time as a second counter-rotating disc formed outside it. We found that the rate of angular-momentum accretion (i.e. the material torque, shown in figure 5) was proportional to the mass accretion rate. The material torque did not change sign until the initial disc had been entirely consumed. The change in sign of the torque was accompanied by a minimum in the accretion luminosity. The second calculation (figure 4) began with two counter-rotating rings with Gaussian density profiles. New material, with the same sense of rotation as the inner ring, was then added at a constant rate at the outer boundary. As with the first calculation, we found the initial rings were rapidly driven in until they lay within the circularisation radius of the newly added material. We had anticipated a catastrophic cancellation of angular momentum followed by radial inflow once the rings interacted. This did not happen. Instead the rings remained cohesive, with a well-defined gap between them. The newly added material then formed a third, outer ring. We concluded that, if the external mass reversal timescale is significantly shorter than the viscous timescale at the circularisation radius, a number of concentric rings with alternating senses of rotation could be present between the circularisation radius and the radius at which the viscous timescale is comparable to the reversal timescale. These simulations neglected three-dimensional effects, and did not account for the unstable wind feeding non-coplanar material onto the system. However we are confident that further work will not alter our main conclusion that changes at the inner boundary of the disc occur on the same timescale as that imposed at the outer boundary. Furthermore we find that material torque reversals occurring as a result of a disc reversal would do so during an accretion luminosity minimum.

## 5 Discussion & Conclusions

Optical and X-ray spectroscopy indicate that a circumstellar cloud or thick disc extends to at least $`3\times 10^{12}`$ cm from the neutron star. This has a mean density $`\sim 7\times 10^{10}\mathrm{cm}^{-3}`$ and temperature $`\sim 20,000`$ K in the emission line region. We describe a model, proposed by Kotani et al. (1999), in which, during the low intensity state, accretion is controlled by X-ray heating, leading to an unstable negative feedback mechanism. This maintains $`L_X`$ at relatively low levels, but highly variable on timescales of order months. Physical conditions in the trapped matter region may be more conducive to formation of a contra-rotating disc and neutron star spin-down when this feedback mechanism is active. When the accretion rate is high, the mechanism fails and $`L_X`$ is higher and more stable. The transition between states is presumably controlled by some instability in the giant companion.
The X-ray pulse profiles from GX 1+4 changed remarkably during an observation by the RXTE satellite over 1996 July 19-21. The profiles were asymmetric and ‘leading-edge bright’ during the early part of the observations, when $`L_X`$ (20–60 keV) was $`6\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}`$ (source distance 10 kpc). After an interval of $`\sim 6`$ hr when $`L_X`$ was $`\sim 10`$ times lower, the intensity increased towards the initial level, but the profiles had changed to ‘trailing-edge bright’. The change in profile may be related to a transition from spin-down to spin-up which was detected by the BATSE experiment on CGRO at about the same time. According to Greenhill et al. (1998), leading-edge bright and trailing-edge bright profiles are normally associated with neutron star spin-down and spin-up respectively. The X-ray spectrum during the RXTE observations was best characterised by Comptonised thermal emission with iron line emission and photo-electric absorption by cold matter in the source region. The column density $`n_H`$ doubled between the early and late phases of the observation and showed significant variation on timescales as short as 2 h. This timescale is too short to be associated with the feedback mechanism discussed above. The extra absorbing matter in the line of sight must be situated much closer to the neutron star. Two-dimensional SPH simulations have been used to investigate the interactions of an existing accretion disc with incoming matter having opposite angular momentum. The simulations showed that a counter-rotating disc was formed outside the existing disc, which quickly shrank inside the circularisation radius of the outer disc. The inner disc was accreted on the viscous timescale. The torque did not change until this disc was fully consumed, and torque reversal was accompanied by a minimum in $`L_X`$. If the external mass reversal timescale is significantly shorter than the viscous timescale at the circularisation radius, a number of concentric rings with alternating senses of rotation can co-exist. Changes at the inner boundary of the disc occur on the same timescale as that imposed at the outer boundary. Material torque reversals occur at a minimum in $`L_X`$. The net torque on the neutron star depends also on magnetic torques due to linkage with disc matter both inside and outside the co-rotation radius (Ghosh & Lamb 1979; Li & Wickramasinghe 1997). The two transitions to spin-up reported by Chakrabarty et al. (1997) occurred when $`L_X`$ was increasing by more than an order of magnitude from very low levels. Conversely the transition to spin-down was associated with a similar magnitude decrease to a very low level. Such transitions could be caused by a disc having alternate zones with prograde and retrograde motion. The BATSE record (Chakrabarty et al., 1997) shows that, during intervals of monotonic spin-up or monotonic spin-down, GX 1+4 and several other wind-fed sources make step-like transitions from one value of $`\dot{P}`$ to another. Hence, if the disc velocity profile has abrupt changes switching the sense of rotation between different zones, as discussed in section 4, step-like changes in the magnitude of $`\dot{P}`$ will occur as the matter is transported inwards. The analysis by Wang & Welter (1981) indicates that asymmetry in pulse profiles may be a consequence of an asymmetry in the accretion flow onto the polar cap region.
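Returning to the magnetic-torque point above, the two radii that control it can be made concrete. This is a hedged sketch with purely illustrative parameters; the dipole moment in particular is not constrained by anything in this paper:

```python
import math

G, M_sun = 6.674e-8, 1.989e33        # cgs units
M_ns, R_ns = 1.4 * M_sun, 1.0e6      # assumed neutron star mass and radius
P_spin = 120.0                       # s: approximate GX 1+4 spin period
L_x = 6.0e36                         # erg/s: luminosity from the RXTE observation
mdot = L_x * R_ns / (G * M_ns)       # accretion rate implied by L_x = G*M*mdot/R

# corotation radius: where the Keplerian period equals the spin period
r_co = (G * M_ns * P_spin**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

def r_mag(mu, xi=0.5):
    """Magnetospheric radius for an assumed dipole moment mu [G cm^3]; xi ~ 0.5 for discs."""
    return xi * (mu**4 / (2.0 * G * M_ns * mdot**2)) ** (1.0 / 7.0)

print(f"r_co = {r_co:.1e} cm")                       # ~4e9 cm
for mu in (1.0e30, 1.0e32):                          # surface fields of order 1e12 and 1e14 G
    print(f"mu = {mu:.0e} -> r_mag = {r_mag(mu):.1e} cm")
```

For a prograde disc, sustained spin-down at this luminosity requires r_mag comparable to or beyond r_co, i.e. a very large dipole moment; a retrograde inner disc removes that requirement, which is part of the attraction of the disc-reversal picture of section 4.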
Following Wang & Welter (1981), the reversal in the asymmetry of the pulse profiles observed in the RXTE data could then be a consequence of accretion flow changes as the direction of rotation of the inner edge of the disc reversed. This occurred at a minimum in $`L_X`$, as predicted by the SPH modelling.

## Acknowledgements

We acknowledge Martijn de Kool and Jianke Li for substantial contributions to the work described in section 4. Barry Giles did much of the work on the XTE X-ray data. Michelle Storey and Kinwah Wu helped organise the workshop sessions on GX 1+4. We are grateful to the Astrophysical Theory Centre and to Martijn (again) for organising a very successful workshop.

## References

Bildsten, L., et al. 1997, ApJS, 113, 367
Chakrabarty, D., Bildsten, L., Grunsfeld, J.M., Koh, D.T., Nelson, R.W., Prince, T.A. & Vaughan, B.A. 1997, ApJ, 481, L101
Chakrabarty, D. & Roche, P. 1997, ApJ, 489, 254
Davidsen, A., Malina, R. & Bowyer, S. 1977, ApJ, 211, 866
Dotani, T., Kii, T., Nagase, F., Makishima, K., Ohashi, T., Sahao, T., Koyama, K. & Tuohy, I.R. 1989, PASJ, 41, 427
Drake, S.A. & Ulrich, R.K. 1980, ApJS, 42, 351
Galloway, D.K., Giles, A.B., Greenhill, J.G. & Storey, M.C. 1999, submitted to MNRAS
Ghosh, P. & Lamb, F.K. 1979, ApJ, 234, 296
Giles, A.B., Jahoda, K., Swank, J.H. & Zhang, W. 1995, PASA, 12, 219
Giles, A.B., Galloway, D.K., Greenhill, J.G., Storey, M.C. & Wilson, C.A. 1999, submitted to ApJ
Greenhill, J.G., et al. 1993, MNRAS, 260, 21
Greenhill, J.G., Galloway, D.K. & Storey, M.C. 1998, PASA, 15, 2, 254
Kotani, T., Dotani, T., Nagase, F., Greenhill, J., Pravdo, S.H. & Angelini, L. 1999, ApJ, 510, 369
Li, J. & Wickramasinghe, D.T. 1997, MNRAS, 286, L25
Makishima, K., et al. 1988, Nature, 333, 746
Murray, J.R., de Kool, M. & Li, J. 1999, ApJ, 515, 738
Nelson, R.W., et al. 1997, ApJ, 488, L117
Ruffert, M. 1997, A&A, 317, 793
Titarchuk, L. 1994, ApJ, 434, 570
Wang, Y.M. & Welter, G.L. 1981, A&A, 102, 97
Yi, I. & Vishniac, E. 1999, ApJ, 516, L87
## 1 Introduction

Clusters of galaxies were first detected as large concentrations of galaxies. With the advent of X-ray astronomy, X-ray emission from galaxy clusters was also found. This emission could be explained as thermal bremsstrahlung from hot gas filling the whole potential well of the cluster (e.g. Sarazin 1986). Moreover, in many clusters radio emission was found. This radio emission is synchrotron emission from relativistic particles. Table 1 summarises the main cluster components together with their mass fractions. While galaxies contribute only 3–5% to the cluster mass, the contribution of the gas is considerably more (10–30%; see e.g. Arnaud & Evrard 1999; Ettori & Fabian 1999). But, as these two contributions together fall far short of 100%, it is concluded that most of the mass is in the form of dark matter, i.e. not directly observable. Mass determinations, which are indirect measurements of the dark matter, are therefore particularly interesting in clusters. The list of components summarises only the main cluster components and the main frequencies at which clusters are observed. Of course this list is not complete, and clusters are also observable at other frequencies; e.g. non-thermal, hard X-ray emission was detected in some clusters (see Fusco-Femiano et al. 1999). In the following, several fields of cluster research are presented. This compilation, too, cannot be complete with only limited space available. Only a few topics could be selected, and they are handled very briefly. Preference was given to currently very active fields which combine observations in two or more wavelengths.

Tab. 1 - Main cluster components

## 2 Optical observations

Historically, optical observations were the first cluster observations. Already from the first cluster catalogues (e.g. Abell 1958) a rough estimate of the richness and the morphology of clusters could be obtained. With the additional information of the velocity of the galaxies, three aspects can be addressed: (1) the redshift measurement yields the three-dimensional distribution of clusters, which is very important for cosmological studies, i.e. the determination of distribution functions. (2) The distribution of velocities within a cluster gives information about the internal dynamics, i.e. the collision of two subclusters can show up as a broadened velocity distribution (e.g. Binggeli et al. 1993). (3) With the assumption of virial equilibrium, the velocity dispersion (= the standard deviation of the distribution) yields a measure for the total cluster mass (see e.g. Carlberg et al. 1996; Girardi et al. 1998). In this way Zwicky found, already in 1933, that not all the cluster mass can be contained in the galaxies. Detailed spectroscopic and morphological studies of distant cluster galaxies provide interesting information on cluster formation and evolution (e.g. Stanford et al. 1998, van Dokkum et al. 1998). The morphological types of galaxies are not distributed uniformly, but the higher the galaxy density, the larger the fraction of ellipticals (Dressler 1980). This holds not only for a comparison of the field galaxies with cluster galaxies, but also for cluster galaxies at different distances to the cluster centre. This effect can be seen well e.g. in the Virgo cluster (Binggeli et al. 1987; Schindler et al. 1999; see Fig. 1). The explanation for this density-morphology relation is interaction between galaxies and interaction of galaxies with the intra-cluster gas.
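Returning to point (3) above, a minimal numerical sketch of the virial mass estimate (the order-unity prefactor and the input values are representative assumptions, not numbers from the text):

```python
G = 6.674e-11                      # SI units
Mpc, M_sun = 3.086e22, 1.989e30    # metres, kilograms

sigma_v = 1.0e6                    # m/s: velocity dispersion of a rich cluster (~1000 km/s)
R = 1.5 * Mpc                      # assumed characteristic (virial) radius

# Virial theorem, M ~ alpha * sigma^2 * R / G; alpha = O(1) depends on the
# density profile and on projecting the 3D dispersion; alpha ~ 3 is a common choice.
M_vir = 3.0 * sigma_v**2 * R / G
print(f"M_vir ~ {M_vir / M_sun:.1e} M_sun")   # ~1e15 M_sun, the scale quoted in section 3
```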
A beautiful example of such galaxy–gas interaction can be seen in the Virgo cluster in HI: two galaxies are stripped of their cool gas when approaching the cluster centre (Cayatte et al. 1990). A currently very active field is gravitational lensing. Since the first arcs in clusters were discovered more than 10 years ago (Lynds & Petrosian 1986; Soucail 1987), gravitational lensing has been used as a way to determine cluster masses. Two methods to determine the mass can be distinguished: strong lensing and weak lensing. Strong lensing uses the giant arcs which are distorted images of background galaxies; see e.g. the beautiful HST images of Cl0024+1654 (Colley et al. 1996) and A2218 (Kneib et al. 1996). With this method only the mass contained in a volume within the arc radius can be measured, i.e. it is restricted to the central part of the cluster. Weak lensing on the other hand uses the systematic elongation of all background galaxies. With a few mathematical operations – a method pioneered by Kaiser & Squires (1993) – this can be directly transformed into a mass distribution without treating different subclusters or single galaxies separately. The difficulties for this method are the identification of background galaxies and the normalisation of the mass, because the mass distribution cannot be measured out to the border of the cluster due to the limited CCD sizes. An example of a weak lensing analysis for the cluster A2218 and a comparison of the different mass determination methods can be seen in Squires et al. (1996). For a recent review on lensing in clusters see Hattori et al. (1999), and for lensing in general see Wambsganss (1998).

## 3 X-ray observations

The gas between the galaxies is so hot that it emits thermal bremsstrahlung in X-rays. This hot gas fills the whole cluster potential and is therefore a good tracer of deep potential wells. As the thermal bremsstrahlung is proportional to the square of the gas density, X-ray selected clusters are much less affected by projection effects than optically selected clusters. Furthermore, the morphology of a cluster can be seen much better in X-rays, even for distant clusters. Two examples of clusters with very different morphologies (see Fig. 2) at similar redshifts ($`z\approx 0.4`$) are Cl0939+4713 – a cluster with two subclumps, each of them showing even some internal structure (Schindler et al. 1998) – and RXJ1347-1145 – the most X-ray luminous cluster found so far, with a very centrally concentrated X-ray emission (Schindler et al. 1997). The morphologies are important for cosmology. In a low-$`\mathrm{\Omega }`$ universe the merging processes should stop earlier, so that more clusters with virialised X-ray emission are expected. Therefore the fraction of virialised clusters at a certain redshift can be used to constrain the mean density of the universe $`\mathrm{\Omega }`$ (for theory see Richstone et al. 1992; for an application to observations see Mohr et al. 1995). While recent morphological studies were carried out mainly with ROSAT observations, the Japanese satellite ASCA has been much used because of its spectral capabilities. An important parameter determined by X-ray spectra is the gas temperature. The temperature is typically between 1 and 10 keV, in good agreement with the depth of the potential well. Temperature maps, which are still very coarse due to the limited spatial resolution of ASCA, show that many clusters are not isothermal (Markevitch 1998).
The temperature distribution is another way to determine the dynamical state of a cluster. As shown by hydrodynamic simulations (Schindler & Müller 1993), the temperature structure shows very clearly the different stages of a merger, e.g. the hot, compressed gas between two subclusters shortly before they collide, or the shock waves emerging after the collision as steep temperature gradients. As the ions are not completely ionised at these temperatures, line emission can be observed. In most of the clusters Fe lines are visible, but sometimes also Si and other elements are detectable. The metallicities (calculated from the Fe lines) are typically in the range between 0.2 and 0.5 in solar units (e.g. Mushotzky & Loewenstein 1997; Tsuru et al. 1997; Fukazawa et al. 1998). Obviously, the gas cannot be purely of primordial origin but must have been enriched by nucleosynthesis processes in cluster galaxies. No (strong) evolution of the metallicities with time has been found out to a redshift of 1 (Schindler 1999). This result is in agreement with optical observations of cluster galaxy evolution (e.g. Stanford et al. 1998) and theoretical models (Martinelli et al. 1999), which both find no metal enrichment of the intra-cluster medium for redshifts smaller than 1, i.e. the enrichment must take place relatively early. A famous example of a high-redshift and high-metallicity cluster is the cluster AXJ2019+112 at a redshift of $`z=1`$, which has a puzzling iron abundance of more than the solar value (Hattori et al. 1997). X-ray observations provide another method to measure the cluster mass. With the assumption of hydrostatic equilibrium, the mass can be estimated simply from the temperature profile and the gas density profile (for applications of this method to a large sample of clusters see e.g. Arnaud & Evrard 1999; Ettori & Fabian 1999). Numerical simulations showed that hydrostatic equilibrium is a good assumption to get reasonable mass estimates in roughly relaxed clusters (Evrard et al. 1996; Schindler 1996). Masses for typical clusters are of the order of $`10^{15}M_{\mathrm{\odot }}`$ when measured out to the virial radius. A comparison of the masses determined by the different methods shows that the virial mass and the X-ray mass are in general in agreement, but the lensing mass (in particular the mass from strong lensing) is sometimes up to a factor of three higher (see e.g. the comparison in Squires et al. 1996 or Schindler et al. 1997). In these comparisons it is important to take into account that the mass is measured in different volumes: in X-rays the mass is measured in a spherical volume, while the gravitational lensing effect is sensitive to all the mass along the line-of-sight, i.e. it measures the mass in a cylindrical volume around the cluster. Therefore, the measurements can give different results, although they are all correct. As the measurements for all the methods mentioned here are getting better and the problems of the individual methods (like e.g. projection effects) are better understood, it is probable that the methods will eventually converge. From the X-ray observations the mass of the intra-cluster gas can also be determined. Together with the mass in the galaxies this yields an average baryon fraction of about 20%. For an $`\mathrm{\Omega }=1`$ universe this value is at least 3 times larger than allowed by primordial nucleosynthesis – a famous discrepancy termed “baryon catastrophe” (White et al. 1993).
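A minimal numerical illustration of this “baryon catastrophe” (the nucleosynthesis value and the Hubble parameter below are representative assumptions of that era, not numbers from the text):

```python
f_b = 0.20            # observed cluster baryon fraction (gas + galaxies)
omega_b_h2 = 0.019    # assumed primordial-nucleosynthesis baryon density (illustrative)
h = 0.65              # assumed Hubble parameter in units of 100 km/s/Mpc

omega_b = omega_b_h2 / h**2
print(f"Omega_b = {omega_b:.3f}")                        # ~0.045
print(f"predicted f_b for Omega=1: {omega_b:.3f}")       # ~4x below the observed ~0.20
print(f"Omega <= Omega_b / f_b = {omega_b / f_b:.2f}")   # ~0.2, i.e. a low-density universe
```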
The easiest way out of the problem now seems to be a cosmological model with a low $`\mathrm{\Omega }`$. A comparison of the gas distribution and the dark matter distribution shows that the gas distribution is generally more extended (e.g. David et al. 1995). Obviously, cluster evolution is not completely a self-similar process, but physical processes taking place in the gas must be taken into account, such as energy input by supernovae, galactic winds or ram-pressure stripping (see e.g. Metzler & Evrard 1997; Cavaliere et al. 1998a). As the gas distribution is relatively more extended in less massive clusters (see Schindler 1999), these heating processes must be more efficient in less massive clusters. Clusters, as the largest bound objects in the universe, are very good tracers of large-scale structure. They can be used for various cosmological tests. Distribution functions like the luminosity function (e.g. De Grandi et al. 1999) or the correlation function (e.g. Guzzo et al. 1999), preferably of an X-ray selected cluster sample (i.e. a mass-selected sample), can be used to constrain cosmological parameters. Also correlations between X-ray quantities (e.g. between the X-ray luminosity, the temperature and the cluster mass) can be used to test different cosmological models, because these relations as well as their evolution depend on cosmological parameters (Oukbir & Blanchard 1992; Bower 1997; Cavaliere et al. 1998b; Eke et al. 1998; Schindler 1999).

## 4 Radio observations

Radio emission has been found in many galaxy clusters. Two different kinds of radio emission can be distinguished: diffuse emission and emission associated with galaxies. The latter (see Owen & Ledlow 1997 for many examples) can be used to determine the relative motion of the intra-cluster gas and head-tail galaxies from their radio morphology (O’Donoghue et al. 1993; Sijbring & de Bruyn 1998). Furthermore, the pressure of the intra-cluster gas can be estimated from the radio lobe expansion into the gas (e.g. Eilek et al. 1984; Feretti et al. 1990). Observations of the rotation measure of sources in or behind a cluster make it possible to determine the cluster magnetic field. Typically, values between 0.1 $`\mu `$G and a few $`\mu `$G are found (e.g. Feretti et al. 1995). In several clusters diffuse radio emission could be detected (e.g. Giovannini et al. 1999). If the diffuse emission is located in the central parts and has a roughly spherical shape, it is called a radio halo; see e.g. the Coma cluster (Giovannini et al. 1993). In other clusters the radio emission is situated in the outer parts and has usually elongated shapes. These sources are called relics; see e.g. A3667 (Röttgering et al. 1997). Although it was previously assumed that the central and non-central sources are different kinds of sources, it is now probable that they have the same origin. A possible explanation for the emission could be merging of subclusters. In such mergers, turbulence and shocks are produced which can provide the necessary energy to reaccelerate particles and to amplify the magnetic field. The discovered correlations between the halo size, the radio power, the X-ray luminosity and the gas temperature of the host cluster support this theory: the collision of more massive clusters (= higher X-ray luminosity and higher gas temperature) would provide more energy for the radio halo.
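As an aside, the field strengths quoted above follow from the observed rotation measures via a simple uniform-slab formula. A hedged sketch with illustrative input values:

```python
def b_field_from_rm(rm, n_e, path_kpc):
    """Mean line-of-sight field in microgauss from a Faraday rotation measure.
    Uses RM [rad/m^2] = 812 * n_e [cm^-3] * B_par [uG] * L [kpc] for a uniform
    slab; real intra-cluster fields are tangled, so this is only an
    order-of-magnitude estimate."""
    return rm / (812.0 * n_e * path_kpc)

# e.g. RM = 200 rad/m^2 through n_e = 1e-3 cm^-3 over 500 kpc
print(f"B_par ~ {b_field_from_rm(200.0, 1e-3, 500.0):.2f} uG")   # ~0.5 uG
```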
Finally, a very exciting field, which uses a combination of radio and X-ray observations, is the distance determination by the Sunyaev-Zel’dovich effect (Sunyaev & Zel’dovich 1972). When photons of the cosmic microwave background pass through the hot gas of a cluster, they are inverse-Compton scattered to slightly higher energies. That means the blackbody spectrum of the CMB appears slightly shifted when observed in the direction of a cluster. This results in an increment or a decrement, depending on which side of the blackbody spectrum the observations are made. This in- or decrement is proportional to the intra-cluster gas density, while the X-ray emission is proportional to the square of the density. These two different dependences allow one to estimate the physical size of the cluster, while the angular size of the cluster can be measured easily. The combination of physical and angular size provides a direct measurement of the distance of the cluster. The problem of this method is that the physical size is measured along the line-of-sight, while the angular size is measured perpendicular to the line-of-sight, i.e. if the cluster is elongated the derived distance is wrong. To avoid this problem, whole samples of clusters are currently being measured (e.g. Carlstrom et al. 1999), because for many clusters this effect is expected to average out. Furthermore, the Sunyaev-Zel’dovich effect can also be used to study the distribution of the intra-cluster gas. For a recent review on the Sunyaev-Zel’dovich effect see Birkinshaw (1999).

## 5 Conclusions

The study of clusters of galaxies has become a very active field in recent years through the development of new techniques (e.g. gravitational lensing) and powerful instruments (e.g. ROSAT, HST). Observations at all wavelengths and the comparison with theory have taught us a lot about cluster components, cluster dynamics and the physics in clusters. A particularly interesting aspect has opened up in recent years with the use of clusters as probes for cosmology. The new instruments, e.g. VLT, XMM, CHANDRA and PLANCK, will certainly make cosmology with clusters an even more fascinating field in the future.

## Acknowledgements

I gratefully acknowledge the hospitality of the Institut d’Estudis Espacials de Catalunya in Barcelona, where these proceedings were written. During the stay there I was supported by the TMR grant ERB-FMGE CT95 0062 by CESCA-CEPBA.
## 1 Introduction

The origin of the spin in the proton is still a subject of much debate. Over the years it has been confirmed that the quarks, as measured in deep inelastic scattering, account for only 30% of the proton spin. Next-to-leading order (NLO) QCD fits of structure function and semi-inclusive data suggest that the contribution of the gluon to the spin could be large. A first attempt at a direct measurement of the polarised gluon distribution, $`\mathrm{\Delta }G`$, using leading charged particles is not in conflict with this suggestion. In general, it has been concluded that major progress in our understanding of the spin structure can be made with clear and unambiguous direct measurements of $`\mathrm{\Delta }G`$. Several experiments are planned to tackle this measurement. It has been shown that the $`ep`$ collider HERA, when both beams are polarised, could make an important contribution to the determination of $`\mathrm{\Delta }G(x)`$ for a considerable $`x`$-range, where $`x`$ is the momentum fraction of the proton carried by the gluon. A particularly sensitive method is to extract $`\mathrm{\Delta }G(x)`$ from di-jet events in deep inelastic scattering. Feasibility studies of extracting the polarised gluon density $`\mathrm{\Delta }G(x)`$ from di-jet events at HERA in leading order (LO) have been performed; the most detailed one was published in the proceedings of the workshop ’Future Physics with polarized beams at HERA’. The event generator PEPSI, which includes hadronization of the final parton state, was used, followed by a simple detector simulation. PEPSI includes LO matrix elements for the QCD processes in the hadronic final state. A first estimate of higher order effects was obtained by including initial and final state unpolarised parton showers. Recently, next-to-leading order polarised cross sections for di-jet production became available with the program MEPJET. This allows one to check NLO corrections to the LO cross section asymmetries. In this paper MEPJET will be used to find optimal event and di-jet selection cuts, which are a balance between statistical significance and analysing power of the asymmetries. Note that at LO the momentum fraction of the proton carried by the gluon, $`x_g`$, can be directly calculated from the di-jet kinematics (see Fig. 1). We define $$x_{jets}=x(1+\frac{\widehat{s}}{Q^2}),$$ calculated from the Bjorken-$`x`$, the four-momentum transfer $`Q^2`$ and the invariant mass $`\widehat{s}`$ of the di-jet system. At LO the variable $`x_{jets}`$ is identical to $`x_g`$, while at NLO $`x_{jets}`$ is different from $`x_g`$ even at parton level, e.g. due to gluon radiation processes, see Fig. 1. In fact the relation $`x_g\ge x_{jets}`$ holds. Since the main purpose of this paper is to show the size of the asymmetries in NLO and the range in $`x_g`$ where one can expect information on $`\mathrm{\Delta }G`$, a simple method to relate the reconstructed $`x_{jets}`$ variable from the jets to the true one will be used. The same technique to unfold the gluon from the data as done at LO will be used. In the last chapter we show the potential of a future high energy $`ep`$ collider to extract $`\mathrm{\Delta }G`$ in LO from di-jet events, using TESLA and HERA as an example.
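For concreteness, the LO reconstruction just described is trivial to code (a sketch; the function name and sample numbers are ours, not from the analysis):

```python
def x_jets(x_bj, q2, s_hat):
    """LO gluon momentum fraction from di-jet kinematics:
    x_jets = x * (1 + s_hat / Q^2).  At LO this equals x_g; at NLO
    x_g >= x_jets, so an unfolding step is needed (section 3)."""
    return x_bj * (1.0 + s_hat / q2)

# e.g. x = 0.005, Q^2 = 20 GeV^2, s_hat = 300 GeV^2  ->  x_jets = 0.08
print(x_jets(0.005, 20.0, 300.0))
```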
## 2 Di-jet selection using MEPJET

We start with kinematic cuts close to the ones used in the published LO study: $`5<Q^2<100\mathrm{GeV}^2`$ (1), $`y>0.3`$ (2), $`E_{electron}>5\mathrm{GeV}`$ (3), $`p_t^{jets}>5\mathrm{GeV}`$ (4). Compared to that study, a pseudorapidity cut of $`|\eta |<3.5`$ is used for the scattered electron and jet selection in this analysis. An additional cut of $`p_t^{jets}>5`$ GeV in the Breit frame is also applied. As before, the cone jet algorithm was used, with a cone size of 1 in azimuthal angle and pseudorapidity. For the calculations we used the structure function parametrisations of GRV for the unpolarised case, and Gehrmann-Stirling set A for the polarised case. The cut on the square of the invariant mass of the two jets, $`\widehat{s}:=Q^2(x_{jets}/x-1)`$, was varied in the range $$\widehat{s}>100,200,300,500\mathrm{GeV}^2,$$ where $`\widehat{s}`$ is computed from the reconstructed jet quantities, and the variables are indicated in Fig. 1. Further cuts which were varied in the study are: $$p_t^{jets}>5,7\mathrm{GeV},$$ $$Q^2>2,5,10,40\mathrm{GeV}^2,$$ $$y>0.15,0.3,$$ and finally the electron energy cut was lowered to $`E_e=3`$ GeV. The results for LO and NLO, polarised and unpolarised cross sections for exclusive di-jet production, for several combinations of cuts, can be seen in Table 1. In the last column of this table the ratio between the expected overall asymmetry ($`\mathrm{\Delta }\sigma /\sigma `$) and a quantity which is proportional to the expected statistical error ($`1/\sqrt{\sigma }`$) has been computed; it hence gives an idea of the sensitivity: a higher value means higher sensitivity. One observes that for all scenarios tried the sensitivity decreases from LO to NLO. Furthermore, globally the corrections to the unpolarised cross sections become large for a high $`\widehat{s}`$ cut, while the polarised cross section receives large corrections when a low $`\widehat{s}`$ cut is used. For the final study we selected those scenarios where the corrections to the polarised and unpolarised cross sections are less than 30% and a good compromise between the analysing power and statistics is found. This can be obtained by choosing $`\widehat{s}>300\mathrm{GeV}^2`$, or $`\widehat{s}>200\mathrm{GeV}^2`$ and $`p_t>7`$ GeV, both of which have a large sensitivity to the gluon according to Table 1. The asymmetries as a function of $`x_{jets}`$ for these scenarios in LO and NLO are compared in Fig. 2. The values are reduced in NLO compared to LO, but still sufficiently large. The errors correspond to the statistical errors expected for $`200\mathrm{pb}^{-1}`$ (but assuming a 100% polarisation of the colliding beams). Table 1 also shows that lowering the electron energy requirement does not bring a significant improvement, despite the inclusion of the region of larger depolarisation factor. We will therefore use the scenario of $`\widehat{s}>200\mathrm{GeV}^2`$, $`p_t>7`$ GeV and $`E_e>5`$ GeV in the following. Before moving towards extracting the gluon distribution we first compare di-jet asymmetries in LO predicted by MEPJET with PEPSI without parton showers, and find them to agree very well. This has been shown before, and is confirmed here. In Table 2 the overall asymmetries are shown using different $`\widehat{s}`$ cuts and event/jet selection cuts as in the LO study. Using the standard cuts, as derived in this paper, i.e.
$`\widehat{s}_{min}>200\mathrm{GeV}^2`$ and $`p_t^{jets}>7`$ GeV, the NLO and LO cross sections of MEPJET and PEPSI are compared in Table 3. For the higher order calculations, the largest difference is observed for the unpolarised cross sections (PEPSI/PS versus MEPJET/NLO), while the polarised cross sections are very similar.

## 3 Correlation of ’true’ and ’visible’ variables

A problem of MEPJET is that the ’true’ $`x_g`$, i.e. the $`x`$ value of the gluon probed in the proton, is no longer known at running time; only the $`x`$ value reconstructed from the invariant mass of the two jets $`s_{ij}`$ is available: $`x_{jets}`$. In contrast to the LO case, in NLO the information of the di-jets alone is not sufficient to reconstruct $`x_g`$ directly event by event; it needs to be ’unfolded’. To be sensitive to $`\mathrm{\Delta }G`$ by measuring di-jet asymmetries alone, there has to be a good correlation between $`x_g`$ and $`x_{jets}`$. This correlation has been checked using the program DISENT. A correlation matrix has been produced, which in a second step can be used in MEPJET to reconstruct $`x_g`$ from $`x_{jets}`$. DISENT contains only unpolarised di-jet cross sections; therefore we could not perform the whole study with this program. For the simulation we used the same conditions as in MEPJET, concerning jet algorithm, parton distributions and cuts. Fig. 3 (right) shows the correlation between $`x_g`$ on the $`x`$-axis and $`x_{jets}`$ on the $`y`$-axis. The correlation looks promising. This correlation matrix was then applied in the MEPJET program: for each ’event’ the $`x_g`$ was determined from the $`x_{jets}`$ randomly, according to the probabilities of this matrix. Fig. 3 (left) shows the polarised cross sections for gluon induced processes as a function of $`x_{jets}`$ and the so-determined $`x_g`$. We see a shift to higher $`x`$ as expected, but the corrections are not very large. Figure 4 shows that also in NLO the polarised cross-section for di-jet production is dominated by gluon-initiated processes. The corresponding asymmetries for events due to quark- and gluon-initiated processes, which cannot be distinguished on an event-by-event basis, are shown in Fig. 5 for a luminosity of 200 pb⁻¹. The figure also shows the asymmetries when calculated versus the ’measured’ $`x`$ ($`x_{jets}`$) or the ’true’ $`x`$ ($`x_{true}=x_g`$) when using the unfolding matrix from DISENT. It demonstrates that the effects are small due to the locality of the $`x_{jets}`$–$`x_g`$ correlation. A comparison of the asymmetries expected in LO and in NLO, for the latter shown both as a function of $`x_g`$ and $`x_{jets}`$, is given in Fig. 6. The reduction of the asymmetry from LO to NLO is clearly visible.

## 4 Sensitivity to $`x\mathrm{\Delta }G(x)`$

In this section we will quantify the sensitivity to the shape of $`x\mathrm{\Delta }G(x)`$, following the method used in the LO study. In a real measurement one could obtain $`x\mathrm{\Delta }G(x)`$ from the measured asymmetry by an unfolding method, where the background would be subtracted statistically and correlations between bins are fully taken into account. The H1 experiment has already shown an NLO extraction of the gluon by using the combined information of the total inclusive and di-jet cross sections. If correlations between bins are small one can use a simpler method performing a bin-by-bin correction. For our study we consider the latter method to be sufficient.
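A minimal sketch of the per-bin statistical error entering such a bin-by-bin estimate (our own shorthand; backgrounds and bin-to-bin correlations are neglected):

```python
import numpy as np

def delta_asym(sigma_bin, lumi, pol_e=0.7, pol_p=0.7):
    """Statistical error on a double-spin asymmetry in one x_jets bin:
    N = sigma * L events; the raw 1/sqrt(N) counting error is inflated
    by the product of the beam polarisations."""
    n_events = sigma_bin * lumi
    return 1.0 / (pol_e * pol_p * np.sqrt(n_events))

# e.g. a 5 pb bin cross section and 200 pb^-1 of data
print(f"dA = {delta_asym(5.0, 200.0):.3f}")   # ~0.065
```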
Taking the NLO GS-A gluon distribution as a reference, we calculated the statistical errors of $`x\mathrm{\Delta }G(x)`$ in the range $`0.005<x<0.4`$, where a significant measurement can be made. Note that this range is shifted slightly to higher $`x`$ values compared to the LO study, since $`x_g>x_{jets}`$. Also shown is the expectation for the NLO GS-C distribution. The results are shown for two values of the integrated luminosity, taking the polarisation of both beams to be 0.7. Even for a luminosity of only 200 pb⁻¹, a clear difference between the two gluon scenarios is already expected.

## 5 HERA-TESLA

Finally, the di-jet asymmetry was calculated for a possible future high energy $`ep`$ collider, consisting on the one hand of the HERA proton ring, and on the other hand of an $`e^+e^{}`$ linear collider (LC). DESY proceeds towards a proposal for such a linear collider, which would have a centre of mass system (CMS) energy of 0.5–1 TeV. It is planned to include the possibility of performing $`ep`$ collisions, by constructing the LC tangential to HERA, allowing for an interaction region in the HERA West hall. The kinematics and beam dynamics have been discussed elsewhere. The polarisation of the electron beam would be sufficiently large (about 80%). If the proton beam is also polarised, polarised $`ep`$ scattering can be studied at a CMS energy of about 1 TeV, allowing the polarised parton distributions to be studied at an order of magnitude lower in $`x`$ compared to HERA. The gain for $`g_1`$ has been discussed elsewhere. Here we show the asymmetries (in LO) for the di-jets, using the same jet selection criteria as used for the HERA study, for collisions of 820 GeV protons on 800 GeV electrons, possibly the maximum which can be expected. The error bars correspond to 1000 pb⁻¹, but the polarisation of the beams is assumed to be 100%. Events and jets are selected within the pseudorapidity range from $`-4`$ to $`3.5`$ for the jets and from $`-7`$ to $`3.5`$ for the scattered electron, with $`Q^2>1`$ GeV², $`p_t^{jets}>5`$ GeV, and $`\widehat{s}>100`$ GeV². The asymmetries are shown for two upper limits on $`Q^2`$. In the figures on the right the low-$`x`$ region is shown explicitly, and the gain in $`x`$-range with respect to nominal HERA is given by the shaded area. The measurement reflects the decreasing asymmetries with decreasing $`x`$. The asymmetries at very low $`x`$ become very small. The lowest value of $`x`$ where a significant measurement of the di-jet asymmetry can be made with TESLA-HERA will be about $`x=0.0005`$. However, a large-statistics sample of $`𝒪(1)`$ fb⁻¹ and an excellent control of systematic errors will be needed.

## 6 Conclusion

The direct measurement of $`\mathrm{\Delta }G(x)`$ via di-jet production has been studied at NLO, using the MEPJET program. The asymmetries are reduced with respect to the LO case, but a sufficiently large sensitivity to the polarised gluon distribution can be obtained in the region $`0.005<x_g<0.4`$ for luminosities larger than 200 pb⁻¹. This measurement can be extended by roughly an order of magnitude to lower $`x`$ with future polarised $`ep`$ collisions using TESLA and HERA.

### Acknowledgements

G.R. thanks Vernon Hughes and the U.S. Department of Energy for financial support.
# Unification of perturbation theory, RMT and semiclassical considerations in the study of parametrically-dependent eigenstates

## Abstract

We consider a classically chaotic system that is described by a Hamiltonian $`ℋ(Q,P;x)`$ where $`x`$ is a constant parameter. Our main interest is in the case of a gas-particle inside a cavity, where $`x`$ controls a deformation of the boundary or the position of a ‘piston’. The quantum-eigenstates of the system are $`|n(x)\rangle `$. We describe how the parametric kernel $`P(n|m)=|\langle n(x)|m(x_0)\rangle |^2`$ evolves as a function of $`\delta x=x-x_0`$. We explore both the perturbative and the non-perturbative regimes, and discuss the capabilities and the limitations of semiclassical as well as of random-waves and random-matrix-theory (RMT) considerations. Consider a system that is described by a Hamiltonian $`ℋ(Q,P;x)`$ where $`(Q,P)`$ are canonical variables and $`x`$ is a constant parameter. Our main interest is in the case where the parameter $`x`$ represents the position of a small rigid body (‘piston’) which is located inside a cavity, and the $`(Q,P)`$ variables describe the motion of a ‘gas particle’. It is assumed that the system is classically chaotic. The eigenstates of the quantized Hamiltonian are $`|n(x)\rangle `$ and the corresponding eigen-energies are $`E_n(x)`$. The eigen-energies are assumed to be ordered, and the mean level spacing will be denoted by $`\mathrm{\Delta }`$. We are interested in the parametric kernel $`P(n|m)=|\langle n(x)|m(x_0)\rangle |^2=\text{trace}(\rho _n\rho _m)`$ (1) In the equation above $`\rho _m(Q,P)`$ and $`\rho _n(Q,P)`$ are the Wigner functions that correspond to the eigenstates $`|m(x_0)\rangle `$ and $`|n(x)\rangle `$ respectively. The trace stands for $`dQdP/(2\pi ℏ)^d`$ integration. The difference $`x-x_0`$ will be denoted by $`\delta x`$. We assume a dense spectrum. The kernel $`P(n|m)`$, regarded as a function of $`n-m`$, describes an energy distribution. As $`\delta x`$ becomes larger, the width as well as the whole profile of this distribution ‘evolves’. Our aim is to study this parametric-evolution (PE). The understanding of PE is essential for the analysis of experimental circumstances where the ‘sudden approximation’ applies. It also constitutes a preliminary stage in the studies of quantum dissipation. The function $`P(n|m)`$ has received different names, such as ‘strength function’ and ‘local density of states’. Some generic features of PE can be deduced by referring to time-independent first-order perturbation theory (FOPT), and to random-matrix-theory (RMT) considerations. Other features can be deduced using the classical approximation, or its more controlled version that we are going to call the phase-space semiclassical approximation. Still another strategy is to use time-domain semiclassical considerations. In the case of cavities one may be tempted to use ‘random-wave’ considerations as well. Depending on the chosen strategy, different results can be obtained. The ‘cavity’ system is a prototype example for demonstrating the ‘clash’ between the various approaches to the problem. We are considering the cavity example where we have a ‘gas’ particle whose kinetic energy is $`E=\frac{1}{2}mv^2`$, where $`m`$ is its mass, and $`v`$ is its velocity. The ‘gas’ particle is moving inside a cavity whose volume is $`𝖵`$ and whose dimensionality is $`d`$. The ballistic mean free path is $`ℓ_{\text{bl}}`$.
The area of the displaced wall-element (‘piston’ for brevity) is $`A`$, while its effective area is $`A_{\text{eff}}`$ (see the cited work for the geometrical definition). The mean free path $`ℓ_{\text{col}}\sim 𝖵/A`$ between collisions with the piston may be much larger than $`ℓ_{\text{bl}}`$. The penetration distance upon a collision is $`ℓ=E/f`$, where $`f`$ is the force that is exerted by the wall. Upon quantization we have an additional length scale, which is the De-Broglie wavelength $`\lambda _\text{B}=2\pi ℏ/(mv)`$. We shall distinguish between the hard-wall case, where we assume $`ℓ<\lambda _\text{B}\ll ℓ_{\text{bl}}`$, and soft walls, for which $`\lambda _\text{B}\ll ℓ`$. Note that taking $`ℏ\to 0`$ implies soft walls. For the convenience of the reader we start by listing the various expressions that can be derived for $`P(n|m)`$, along with an overview of our PE picture. Then we proceed with a detailed presentation. We are going to argue that four parametric scales $`\delta x_c^{\text{qm}}\ll \delta x_{\text{NU}}\ll \delta x_{\text{prt}}\ll \delta x_{\text{SC}}`$ are important in the study of PE. Standard FOPT assumes that $`P(n|m)`$ has a simple perturbative structure that contains mainly one state: $`P(n|m)\approx \delta _{nm}+\text{Tail}(n-m)`$ (2) We define $`\delta x_c^{\text{qm}}`$ to be the parametric change that is required in order to mix neighboring levels. For $`\delta x>\delta x_c^{\text{qm}}`$ an improved version of FOPT implies that $`P(n|m)`$ has a core-tail structure: $`P(n|m)\approx \text{Core}(n-m)+\text{Tail}(n-m)`$ (3) The core consists of those levels that are mixed non-perturbatively, and the tail evolves as if standard FOPT is still applicable. In particular we argue that the tail grows like $`\delta x^2`$, and not like $`\delta x`$. We also explain how the core-width depends on $`\delta x`$. It should be noted that Wigner’s Lorentzian can be regarded as a special case of the core-tail structure. Another strategy is to use semiclassical considerations. The simplest idea is to look at the definition (1) and to argue that $`\rho _n(Q,P)`$ and $`\rho _m(Q,P)`$ can be approximated by microcanonical distributions. This is equivalent to the classical approximation that has been tested previously. If we try to apply this approximation to the cavity example we should be aware of a certain complication that is illustrated in Fig. 1. One obtains $`P(n|m)=\left(1-{\displaystyle \frac{\tau _{\text{cl}}}{\tau _{\text{col}}}}\right)\delta (n-m)+\text{S}\left({\displaystyle \frac{E_n-E_m}{\delta E_{\text{cl}}}}\right)`$ (4) The detailed explanation of this expression is postponed to a later paragraph. A more careful semiclassical procedure is to take the width of the Wigner function into account. Namely, we can approximate $`\rho _n(Q,P)`$ and $`\rho _m(Q,P)`$ by smeared microcanonical distributions. This can be used in order to get an idea concerning the quantum mechanical ‘interpretation’ of the Dirac delta function component in (4). The result is $`\delta (n-m)\to {\displaystyle \frac{1}{\pi }}{\displaystyle \frac{\delta E_{\text{SC}}}{\delta E_{\text{SC}}^2+(E_n-E_m)^2}}`$ (5) with $`\delta E_{\text{SC}}=ℏ/\tau _{\text{bl}}`$, where $`\tau _{\text{bl}}=ℓ_{\text{bl}}/v`$. However, we are going to argue that the latter procedure, which is equivalent to the assumption of having uncorrelated random waves, is an over-simplification.
It is better to use the time-domain semiclassical approach, which is based on the realization that $`P(n|m)`$ is related to the so-called survival amplitude via a Fourier transform, leading to the identification $`\delta E_{\text{SC}}=ℏ/\tau _{\text{col}}`$, where $`\tau _{\text{col}}=ℓ_{\text{col}}/v`$. The important point to realize is that (4) with (5) is fundamentally different from either (2) or (3). The main purpose of this Letter is to give a clear idea of the route from the regime where perturbation theory applies to the non-perturbative regime where semiclassical considerations become useful. We are going to explain that the width of the core in (3) defines a ‘window’ through which we can view the ‘landscape’ of the semiclassical analysis. As $`\delta x`$ becomes larger, this ‘window’ becomes wider, and eventually some of the semiclassical structure is exposed. This is marked by the non-universal parametric scale $`\delta x_{\text{NU}}`$. For $`\delta x`$ much larger than $`\delta x_{\text{NU}}`$, the non-universal structure (5) of the core is exposed. Still, the perturbative tail of (3) may survive for relatively large values of $`\delta x`$. One wonders whether this tail survives for arbitrarily large $`\delta x`$. While the answer to the latter question may be positive for hard walls, it is definitely negative for soft walls, as well as for any other generic system. Assuming soft walls, one should realize that the perturbative tail of (3) occupies a finite bandwidth. It is well known that having a finite bandwidth is a generic feature of all quantized systems, provided $`ℏ`$ is reasonably small. Therefore one should introduce an additional parametric scale $`\delta x_{\text{prt}}`$. For $`\delta x\gg \delta x_{\text{prt}}`$ the core spills over the bandwidth of the perturbative tail, and $`P(n|m)`$ becomes purely non-perturbative. The non-perturbative $`P(n|m)`$ does not necessarily correspond to the classical approximation (4). We are going to introduce one additional scale, $`\delta x_{\text{SC}}`$. For $`\delta x\gg \delta x_{\text{SC}}`$ detailed quantal-classical correspondence is guaranteed, and (4) with (5) becomes applicable. Expression (2) is a straightforward result of standard time-independent FOPT, where $`\text{Tail}(n-m)`$ $`=`$ $`\left|\left({\displaystyle \frac{\partial ℋ}{\partial x}}\right)_{nm}\right|^2{\displaystyle \frac{\delta x^2}{(E_n-E_m)^2}}`$ (6) An estimate for the matrix elements $`(\partial ℋ/\partial x)_{nm}`$ follows from simple considerations. Upon substitution into (6) it leads to: $`P(n|m)\approx \left({\displaystyle \frac{\delta x}{\delta x_c^{\text{qm}}}}\right)^\beta {\displaystyle \frac{1}{|n-m|^{2+\gamma }}}`$ (7) for $`b(x)\ll |n-m|\ll b`$ (8) with $`\beta =2`$ and $`b(x)=0`$. We have defined $`\delta x_c^{\text{qm}}\equiv \sqrt{{\displaystyle \frac{\mathrm{\Gamma }((d+3)/2)}{4\pi ^{(d-1)/2}}}{\displaystyle \frac{1}{A_{\text{eff}}}}\lambda _\text{B}^{d+1}}`$ (9) We shall refer to the dependence of $`|(\partial ℋ/\partial x)_{nm}|^2`$ on $`n-m`$ as the band-profile. It is well known that the band-profile is related (via a Fourier transform) to a classical correlation function. If successive collisions with the ‘piston’ are uncorrelated then we have $`\gamma =0`$. But in other typical circumstances we may have $`\gamma >0`$. The matrix $`(\partial ℋ/\partial x)_{nm}`$ is not a banded matrix unless we assume soft (rather than hard) walls. In the latter case the bandwidth $`\mathrm{\Delta }_b=ℏ/\tau _{\text{cl}}`$ is related to the collision time $`\tau _{\text{cl}}=ℓ/v`$ with the walls.
Having hard walls ($`ℓ<\lambda _\text{B}`$) implies that $`\mathrm{\Delta }_b`$ becomes (formally) larger than $`E`$. The notion of bandwidth is meaningful only for soft walls ($`\lambda _\text{B}\ll ℓ`$). In dimensionless units the bandwidth is commonly denoted by $`b=\mathrm{\Delta }_b/\mathrm{\Delta }`$. The standard result (2) with (6) of FOPT is valid as long as $`\delta x\ll \delta x_c^{\text{qm}}`$. Once $`\delta x`$ becomes of the order of $`\delta x_c^{\text{qm}}`$, we expect a few levels to be mixed non-perturbatively. Consequently (for $`\delta x>\delta x_c^{\text{qm}}`$) the standard version of FOPT breaks down. As $`\delta x`$ becomes larger, more and more levels are being mixed non-perturbatively, and it is natural to distinguish between core and tail regions. The core-width $`b(x)`$ is conveniently defined as the participation ratio (PRR), namely $`b(x)=\left(\sum _n(P(n|m))^2\right)^{-1}`$. The tail consists of all the levels that become ‘occupied’ due to first-order transitions from the core. It extends within the range $`b(x)<|n-m|<b`$. Most of the spreading probability is contained within the core region, which implies a natural extension of FOPT: the first step is to make a transformation to a new basis where transitions within the core are eliminated; the second step is to use FOPT (in the new basis) in order to analyze the core-to-tail transitions. Details of this procedure are discussed elsewhere, and the consequences have been tested numerically. The most important (and non-trivial) consequence of this procedure is the observation that mixing on small scales does not affect the transitions on large scales. Therefore we have in the tail region $`P(n|m)\propto \delta x^2`$ rather than $`P(n|m)\propto \delta x`$. The above considerations can be summarized by stating that (8) holds with $`\beta =2`$ well beyond the breakdown of standard FOPT. We turn now to discuss the non-perturbative structure of the core. The identification of $`b(x)`$ with the inverse participation ratio is a practical procedure as long as we assume a simple energy spreading profile, where the core is characterized by a single width-scale. As long as this assumption (of having a structure-less core) is true we can go one step further and argue that $`b(x)|_{\text{PRR}}=2\pi ^2\left({\displaystyle \frac{\delta x}{\delta x_c^{\text{qm}}}}\right)^{2/(1+\gamma )}\text{assuming }|\gamma |<1`$ (10) The argument goes as follows: Assuming that there is only one relevant energy scale ($`b(x)`$), it is implied by (8) that $`P(n|m)`$ has the normalization $`(\delta x/\delta x_c^{\text{qm}})^2/(b(x))^{1+\gamma }`$. This should be of order unity. Hence (10) follows. The tail should go down fast enough ($`\gamma >-1`$), else our ‘improved’ perturbation theory does not hold. Namely, for $`\gamma <-1`$ the core-width becomes cutoff dependent (via its definition as a PRR), and consequently it is not legitimate to neglect the ‘back reaction’ of core-to-tail transitions. The tail should go down slowly enough ($`\gamma <1`$) in order to guarantee that the core width is tail-determined. Otherwise, if $`\gamma >1`$, the core width is expected to be determined by transitions between near-neighbor levels, leading to the simple linear behavior $`b(x)=\delta x/\delta x_c^{\text{qm}}`$. Non-perturbative features of $`P(n|m)`$ are associated with the structure of the core. In order to further analyze the non-perturbative features of $`P(n|m)`$ we are going to apply semiclassical considerations.
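The scaling argument behind (10) is easily checked numerically. A sketch in our own notation; here b(x) is extracted from the normalization condition rather than from the participation ratio itself, which suffices for the scaling:

```python
import numpy as np

gamma = 0.0
n = np.arange(1, 10**6, dtype=float)

def core_width(dx):
    # tail occupation per level, eqs. (7)-(8), in units where delta_x_c = 1
    tail = dx**2 / n**(2.0 + gamma)
    outside = np.cumsum(tail[::-1])[::-1]   # tail probability beyond each |n-m|
    return n[outside <= 1.0][0]             # b(x): where the tail saturates unit norm

for dx in (10.0, 30.0, 100.0):
    print(dx, core_width(dx))               # grows like dx**(2/(1+gamma)), here dx**2
```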
An eigenstate $`|n(x)\rangle `$ can be represented by a Wigner function $`\rho _n(Q,P)`$. In the classical limit $`\rho _n(Q,P)`$ is supported by the energy surface $`ℋ(Q,P;x)=E_n`$. However, unlike a microcanonical distribution, it is further characterized by a non-trivial transverse structure. One should distinguish between the ‘bulk’ flat portions of the energy surface (where $`Q`$ describes free motion), and the relatively narrow curved portions (where $`Q`$ is within the wall field-of-force). In the curved portion of the energy surface (near the turning points), the Wigner function has a transverse Airy structure whose ‘thickness’ is characterized by the energy scale $`\mathrm{\Delta }_{\text{SC}}=((ℏ/\tau _{\text{cl}})^2E)^{1/3}`$. This latter expression is valid for soft walls ($`\lambda _\text{B}\ll ℓ`$). In the hard-wall case ($`ℓ<\lambda _\text{B}`$) it goes to $`\mathrm{\Delta }_{\text{SC}}\sim E`$. Unlike the curved portions, the ‘bulk’ flat portions of the energy surface are characterized by $`\mathrm{\Delta }_{\text{SC}}=ℏ/\tau _{\text{bl}}`$. Now we consider two sets of eigenstates, $`|n(x)\rangle `$ and $`|m(x_0)\rangle `$, which are represented by two sets of Wigner functions $`\rho _n(Q,P)`$ and $`\rho _m(Q,P)`$. The probability kernel (1) can be written as $`P(n|m)=\text{trace}(\rho _n\rho _m)`$. If $`\rho _n(Q,P)`$ and $`\rho _m(Q,P)`$ are approximated by microcanonical distributions, then $`P(n|m)`$ is just the projection of the energy surface that corresponds to $`m`$ onto the “new” energy surface that corresponds to $`n`$. This leads to the classical approximation, Eq. (4). In the classical limit $`n`$ and $`m`$ become continuous variables, and the Dirac delta just reflects the observation that most of the energy surface (the ‘bulk’ component) is not affected by changing the position of the classically-small ‘piston’. The second term in (4) has the normalization $`\tau _{\text{cl}}/\tau _{\text{col}}`$, and corresponds to the tiny component which is affected by the displacement of the ‘piston’. For $`\delta x<ℓ`$ it extends over an energy range $`\delta E_{\text{cl}}=f\delta x`$, where $`f`$ is the force which is exerted on the particle by the wall. When $`\delta x`$ becomes larger than $`ℓ`$ the energy spread becomes of order $`E`$. In the quantum-mechanical case we should wonder whether (4) can be used as an approximation, and what is the proper ‘interpretation’ of the Dirac delta function. It is relatively easy to specify a sufficient condition for the validity of the classical approximation. Namely, the transverse structure of the Wigner function can be ignored if $`\mathrm{\Delta }_{\text{SC}}\ll |E_n-E_m|,\delta E_{\text{cl}}`$. For hard walls $`\mathrm{\Delta }_{\text{SC}}\sim E`$ and therefore the classical approximation becomes inapplicable. For soft walls the necessary condition $`\mathrm{\Delta }_{\text{SC}}\ll \delta E_{\text{cl}}`$ is satisfied provided $`\delta x`$ is large enough. Namely $`\delta x\gg \delta x_{\text{SC}}`$, with $`\delta x_{\text{SC}}=(ℓ\lambda _\text{B}^2)^{1/3}`$. We want to go beyond the classical approximation, and to understand how the classical Dirac delta function in (4) manifests itself in the quantum mechanical case. Thus we are interested in the singular overlap of the ‘bulk’ components (see Fig. 1), and the relevant $`\mathrm{\Delta }_{\text{SC}}`$ for the current discussion is $`ℏ/\tau _{\text{bl}}`$.
The most naive guess is that the contribution due to the overlap of ‘bulk’ components becomes non-zero once $`|E_n-E_m|<\mathrm{\Delta }_{\text{SC}}`$. Equivalently, one may invoke a ‘random-wave’ assumption: one may have the idea that $`|n(x)\rangle `$ and $`|m(x_0)\rangle `$ can be treated as uncorrelated random superpositions of plane waves. Adopting the random-wave assumption, it is technically lengthy but still straightforward to derive (4) with $`\delta E_{\text{SC}}=ℏ/\tau _{\text{bl}}`$. The naive phase-space argument that supports the ‘random wave’ result (5) is definitely wrong. One should realize that $`|E_n-E_m|<\mathrm{\Delta }_{\text{SC}}`$ is a necessary rather than a sufficient condition for having a non-vanishing ‘bulk’ contribution. This latter observation becomes evident if one considers the trivial case $`\delta x=0`$, for which we should get $`P(n|m)=0`$ for any $`n\ne m`$. Thus $`ℏ/\tau _{\text{bl}}`$ should be regarded as an upper limit for $`\delta E_{\text{SC}}`$. We are going to argue that the correct result (for large enough $`\delta x`$) is indeed (5), but $`\tau _{\text{bl}}`$ should be replaced by the possibly much larger timescale $`\tau _{\text{col}}`$. In order to go beyond the random-wave assumption we use the time-domain semiclassical approach, which is based on the realization that $`P(n|m)`$ is related to the so-called survival amplitude via a Fourier transform: $`{\displaystyle \underset{n}{}}P(n|m)2\pi \delta (\omega -{\textstyle \frac{E_n}{ℏ}})=\text{FT}\langle m|\mathrm{exp}(-i{\textstyle \frac{ℋt}{ℏ}})|m\rangle `$ (11) Note that $`|m\rangle `$ is an eigenstate of $`ℋ(Q,P;x_0)`$ while $`ℋ=ℋ(Q,P;x)`$. The knowledge of the short-time dynamics, via classical considerations, can be used to obtain the ‘envelope’ of $`P(n|m)`$. Adopting Wigner’s picture, the evolving $`|m\rangle `$ on the right hand side of (11) is represented by an evolving (quasi) distribution $`\rho _m(Q,P;t)`$. Let us assume that the ‘piston’ is small, such that the collision rate with it ($`1/\tau _{\text{col}}`$) is much smaller than $`1/\tau _{\text{bl}}`$. Due to the chaotic nature of the motion, successive collisions with the piston are uncorrelated. It follows that the portion of $`\rho _m(Q,P;t)`$ which is not affected by collisions with the ‘piston’ decays exponentially as $`\mathrm{exp}(-t/\tau _{\text{col}})`$. It is reasonable to assume that any scattered portion of $`\rho _m(Q,P;t)`$ loses phase-correlation with the unscattered portion. Therefore the right hand side of (11) is the Fourier transform of an exponential. Consequently $`P(n|m)`$ should have the Lorentzian shape (5), but the correct energy width is $`\delta E_{\text{SC}}=ℏ/\tau _{\text{col}}`$ rather than $`ℏ/\tau _{\text{bl}}`$.
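This last step is easy to verify numerically. A self-contained sketch with ℏ = 1, taking the survival amplitude itself to decay exponentially, as in the argument above:

```python
import numpy as np

tau = 1.0                              # collision time tau_col, hbar = 1
dt = 0.02
t = np.arange(-2**15, 2**15) * dt      # time grid with t = 0 on a grid point
surv = np.exp(-np.abs(t) / tau)        # exponentially decaying survival amplitude

# P(E) as the Fourier transform of the survival amplitude, cf. eq. (11)
E = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
P = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(surv))).real * dt / (2 * np.pi)

lorentz = (1 / np.pi) * (1 / tau) / ((1 / tau)**2 + E**2)   # eq. (5) with dE_SC = 1/tau
print(np.abs(P - lorentz).max())       # small: the Lorentzian of width hbar/tau is recovered
```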
This happens when $`b(x)`$ of (10) becomes larger than $`\delta E_{\text{SC}}/\mathrm{\Delta }`$, leading to $`\delta x_{\text{NU}}={\displaystyle \frac{1}{4\pi }}\left((d+1){\displaystyle \frac{A}{A_{\text{eff}}}}\right)^{1/2}\lambda _\text{B}`$ (12) Let us re-emphasize that the semiclassical argument based on (11) applies to the non-universal parametric regime $`\delta x≫\delta x_{\text{NU}}`$, where the semiclassical Lorentzian structure (5) is exposed. It is also important to realize that in the non-universal regime we do not have a theory for the $`b(x)`$ of (8). The derivation of (10) is based on the assumption of having a structure-less core, and therefore pertains only to the universal regime. It is well known that for any quantized system $`(∂ℋ/∂x)_{nm}`$ is characterized, for sufficiently small $`ℏ`$, by a finite bandwidth $`\mathrm{\Delta }_b`$. Consequently it is possible to define a non-perturbative regime $`\delta x≫\delta x_{\text{prt}}`$, where the condition $`b(x)≪b`$ is violated. In the non-perturbative regime expression (8) becomes inapplicable because the core spills over the (perturbative) tail region. Thus $`P(n|m)`$ becomes purely non-perturbative. Hard walls are non-generic as far as the above semiclassical considerations are concerned. In the proper classical limit all the classical quantities should be held fixed (and finite), while making $`ℏ`$ smaller and smaller. Therefore the proper classical limit implies soft walls ($`\lambda _\text{B}≪ℓ`$), leading to the finite bandwidth $`\mathrm{\Delta }_b=ℏ/\tau _{\text{cl}}`$. From (10) it follows that the condition $`b(x)≪b`$ is definitely not violated for $`\delta x∼\delta x_{\text{NU}}`$. Hence we conclude that $`\delta x_{\text{prt}}≫\delta x_{\text{NU}}`$, but we cannot give an explicit expression since (10) becomes non-valid in the non-universal regime. In the parametric regime $`\delta x_{\text{NU}}≪\delta x≪\delta x_{\text{prt}}`$ we have on the one hand $`\delta E_{\text{cl}}≫\mathrm{\Delta }_b`$, and on the other hand $`b(x)≪b`$ by definition. Therefore we cannot get in this regime a contribution that corresponds to the second term in (4). A necessary condition for the manifestation of this second term is $`\delta x≫\delta x_{\text{prt}}`$. However, it should be realized that having $`\delta x≫\delta x_{\text{prt}}`$ is not a sufficient condition for having detailed correspondence with the classical approximation. For our ‘cavity’ example detailed correspondence means that the whole classical structure of (4) is exposed. As discussed previously, the sufficient condition for having such detailed correspondence is $`\delta x≫\delta x_{\text{SC}}`$. This latter condition is always satisfied in the limit $`ℏ→0`$. We thank Michael Haggerty for his useful comments, and ITAMP for their support.
# Spaces of Knots

Allen Hatcher

Classical knot theory is concerned with isotopy classes of knots in the 3-sphere, in other words, path-components of the space $`𝒦`$ of all smooth submanifolds of $`S^3`$ diffeomorphic to the circle $`S^1`$. What can be said about the homotopy types of these various path-components? One would like to find, for the path-component $`𝒦_K`$ containing a given knot $`K`$, a small subspace $`ℳ_K`$ to which $`𝒦_K`$ deformation retracts, thus a minimal homotopic model for $`𝒦_K`$. In this paper we describe a reasonable candidate for $`ℳ_K`$ and prove for many knots $`K`$ that $`𝒦_K`$ does indeed have the homotopy type of the model $`ℳ_K`$. The proof would apply for all $`K`$ provided that a certain well-known conjecture in 3-manifold theory is true, the conjecture that every free action of a finite cyclic group on $`S^3`$ is equivalent to a standard linear action. The model $`ℳ_K`$ takes a particularly simple form if $`K`$ is either a torus knot or a hyperbolic knot. In these cases $`ℳ_K`$ is a single orbit of the action of $`SO(4)`$ on $`𝒦_K`$ by rotations of the ambient space $`S^3`$, namely an orbit of a “maximally symmetric” position for $`K`$, a position where the subgroup $`G_K⊂SO(4)`$ leaving $`K`$ setwise invariant is as large as possible. The orbit is thus the coset space $`SO(4)/G_K`$. The assertion that $`𝒦_K`$ deformation retracts to $`ℳ_K=SO(4)/G_K`$ is then a sort of homotopic rigidity property of $`K`$. Let us describe these models in more detail. Consider first the case of the trivial knot. Its most symmetric position is clearly a great circle in $`S^3`$. The subgroup $`G_K⊂SO(4)`$ taking $`K`$ to itself is then the index-two subgroup of $`O(2)\times O(2)`$ consisting of orientation-preserving isometries. It was shown in \[H1\] that $`𝒦_K`$ has the homotopy type of the orbit $`ℳ_K=SO(4)/G_K`$, which can be identified with the 4-dimensional Grassmann manifold of 2-planes through the origin in $`ℝ^4`$. Consider next a nontrivial torus knot $`K=K_{p,q}`$ for relatively prime integers $`p`$ and $`q`$, neither of which is $`\pm 1`$. Regarding $`S^3`$ as the unit sphere in $`ℂ^2`$, the most symmetric position for $`K`$ is as the set of points $`(z^p,z^q)/\sqrt{2}`$ with $`|z|=1`$. Its symmetry group then contains the unitary diagonal matrices with $`z^p`$ and $`z^q`$ as diagonal entries, forming a subgroup $`S^1⊂SO(4)`$. There is also a rotational symmetry reversing the orientation of $`K`$, given by complex conjugation in each variable. Thus $`G_K`$ contains a copy of $`O(2)`$, whose restriction to $`K`$ is the usual action of $`O(2)`$ on $`S^1`$. It is easy to see that $`G_K`$ cannot be larger than this, since if it were, $`K`$ would be pointwise fixed by a nontrivial element of $`SO(4)`$, hence would be unknotted. We will show that $`𝒦_K`$ has the homotopy type of the orbit $`SO(4)/G_K`$, a closed 5-manifold. This may have been known for some time, but there does not seem to be a proof in the literature. Now let $`K`$ be hyperbolic, so $`S^3−K`$ has a unique complete hyperbolic structure, and let $`\mathrm{\Gamma }_K`$ be the finite group of orientation-preserving isometries of this hyperbolic structure. An easy argument in hyperbolic geometry shows that elements of $`\mathrm{\Gamma }_K`$ must take meridians of $`K`$ to meridians, so the action of $`\mathrm{\Gamma }_K`$ on $`S^3−K`$ extends to an action on $`S^3`$.
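As a quick check of the claimed $`S^1`$ symmetry (this worked equation is our addition, not part of the original text), write $`u=e^{i\theta }`$ with $`|u|=1`$; the diagonal matrix with entries $`u^p`$ and $`u^q`$ carries the torus knot to itself, merely reparametrizing it:

```latex
\mathrm{diag}(u^{p},u^{q})\cdot \tfrac{1}{\sqrt{2}}\,(z^{p},z^{q})
  \;=\; \tfrac{1}{\sqrt{2}}\,\big((uz)^{p},(uz)^{q}\big), \qquad |z|=|u|=1 ,
```

while complex conjugation in each variable sends the point with parameter $`z`$ to the point with parameter $`\overline{z}`$, reversing the orientation of $`K`$.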
By the Smith conjecture \[MB\], no nontrivial element of $`\mathrm{\Gamma }_K`$ fixes $`K`$ pointwise, so $`\mathrm{\Gamma }_K`$ acts faithfully on $`K`$ by diffeomorphisms, hence $`\mathrm{\Gamma }_K`$ must be cyclic or dihedral. If the action of $`\mathrm{\Gamma }_K`$ on $`S^3`$ is equivalent to an action by elements of $`SO(4)`$, then we can isotope $`K`$ to a “symmetric” position in which the action is by isometries of $`S^3`$, so we have an embedding $`\mathrm{\Gamma }_K⊂SO(4)`$. We will show in this case that $`𝒦_K`$ has the homotopy type of $`SO(4)/\mathrm{\Gamma }_K`$. The symmetry group $`G_K`$ cannot be larger than $`\mathrm{\Gamma }_K`$, so in this symmetric position for $`K`$ we have $`G_K=\mathrm{\Gamma }_K`$, and $`SO(4)/\mathrm{\Gamma }_K`$ is the orbit of $`K`$ under the $`SO(4)`$ action. The hypothesis that the action of $`\mathrm{\Gamma }_K`$ on $`S^3`$ is equivalent to an isometric action can be restated as the well-known conjecture that the orbifold $`S^3/\mathrm{\Gamma }_K`$ has a spherical structure, a special case of Thurston’s geometrization conjecture for orbifolds. We call it the linearization conjecture for $`K`$. It is a theorem of Thurston that the geometrization conjecture is true for orbifolds which are not actually manifolds. So the linearization conjecture for $`K`$ is true unless $`\mathrm{\Gamma }_K`$ is a cyclic group acting freely on $`S^3`$. It is also known to be true for free actions by a cyclic group of order two \[Lv\], a power of two \[M\], or three \[R2\]. It appears to be still unknown for prime orders $`p≥5`$. To prove these results we study the space $`ℰ`$ of smooth embeddings $`f:S^1→S^3`$. The space $`𝒦`$ is the orbit space of $`ℰ`$ under the action of the diffeomorphism group $`\text{Diff}(S^1)`$ by composition in the domain. This is a free action, and the projection $`ℰ→𝒦`$ is a principal bundle with fiber $`\text{Diff}(S^1)`$. The path-components of $`ℰ`$ are the oriented knot types, since a standard orientation of $`S^1`$ induces an orientation on the image of each embedding $`S^1→S^3`$. A component $`𝒦_K`$ of $`𝒦`$ is the image of one or two components of $`ℰ`$, depending on whether $`K`$ is invertible or not. The group $`SO(4)`$ also acts on $`ℰ`$, by composition in the range. This is a free action when $`K`$ is nontrivial, defining a principal bundle $`ℰ_K→ℰ_K/SO(4)`$ for each component $`ℰ_K`$ of $`ℰ`$. We show that $`ℰ_K/SO(4)`$ is a $`K(\pi ,1)`$ for all nontrivial $`K`$, and we give a description of the group $`\pi =\pi _K`$. In particular we can deduce that $`ℰ_K/SO(4)`$ has the homotopy type of a finite CW complex, and it follows that the same is true for $`ℰ_K`$. By contrast, we have only been able to show that $`𝒦_K`$ has finite homotopy type if we assume the linearization conjecture. When $`K`$ is a nontrivial torus knot the group $`\pi _K`$ is trivial, so $`ℰ_K/SO(4)`$ is contractible and $`ℰ_K`$ has the homotopy type of a single orbit $`SO(4)`$. When $`K`$ is a hyperbolic knot, $`\pi _K`$ is $`ℤ`$, and it follows that $`ℰ_K/SO(4)≃S^1`$ and $`ℰ_K≃S^1\times SO(4)`$. If the linearization conjecture is true for $`K`$, $`ℰ_K`$ has the homotopy type of a single orbit of the action of $`SO(2)\times SO(4)`$ on $`ℰ_K`$ by rotations in both domain and range, namely the orbit containing a sufficiently symmetric embedding. Such an orbit has the form $`(SO(2)\times SO(4))/\mathrm{\Gamma }_K^+`$ where $`\mathrm{\Gamma }_K^+`$ is the cyclic subgroup of $`\mathrm{\Gamma }_K`$ consisting of symmetries preserving the orientation of $`K`$.
Knots which are not torus knots or hyperbolic knots are satellite knots, and for these the situation becomes more complicated. In particular the homotopic rigidity property of torus knots and hyperbolic knots fails for satellite knots. Modulo the linearization conjecture again, we show that $`𝒦_K`$ has the homotopy type of a model $`ℳ_K`$ which is a finite-dimensional manifold of the form $`(X_K\times SO(4))/\mathrm{\Gamma }_K`$, where $`\mathrm{\Gamma }_K`$ is a finite group of “supersymmetries” of $`K`$ and $`X_K`$ is the product of a torus of some dimension and a number of configuration spaces $`C_n`$ of ordered $`n`$-tuples of distinct points in $`ℝ^2`$. These configuration spaces occur only when the satellite structure of $`K`$ involves nonprime knots. When they are present, $`\pi _1𝒦_K`$ involves braid groups, a phenomenon observed first in \[G\] in the case that $`K`$ itself is nonprime. When there are no configuration spaces, $`ℳ_K`$ is a closed manifold, but in general there can be no closed manifold model for $`𝒦_K`$ since it may not satisfy Poincaré duality. The space $`X_K`$ appearing in $`ℳ_K`$ is determined just by the general form of the satellite structure of $`K`$, while the group $`\mathrm{\Gamma }_K`$ is more delicate, depending strongly on the particular knots appearing in the satellite structure. ($`\mathrm{\Gamma }_K`$ is the quotient of $`\pi _0\text{Diff}^+(S^3\mathrm{rel}K)`$ by the subgroup generated by Dehn twists along essential tori in $`S^3−K`$.) See Section 2 of the paper, where the case of satellite knots is treated in detail.

## 1. Homotopically Rigid Knots

As described in the introduction, let $`ℰ_K`$ and $`𝒦_K`$ be the components of the spaces of embeddings $`S^1→S^3`$ and images of such embeddings, respectively, corresponding to a given knot $`K`$. These two spaces are related via the fibration that defines $`𝒦_K`$ as a quotient space of $`ℰ_K`$,
$$F→ℰ_K→𝒦_K$$
whose fiber $`F`$ is the diffeomorphism group $`\text{Diff}(S^1)`$ if $`K`$ is invertible, or the orientation-preserving subgroup $`\text{Diff}^+(S^1)`$ if $`K`$ is not invertible. Here “invertible” has its standard meaning of “isotopic to itself with reversed orientation.” Since $`\text{Diff}(S^1)≃O(2)`$ and $`\text{Diff}^+(S^1)≃SO(2)`$, the homotopy types of $`ℰ_K`$ and $`𝒦_K`$ should be closely related. It happens that $`ℰ_K`$ is more directly accessible to our techniques, so we study this first, then apply the results to $`𝒦_K`$. When $`K`$ is nontrivial, the group $`SO(4)`$ acts freely on $`ℰ_K`$ by composition in the range, defining a principal bundle
$$SO(4)→ℰ_K→ℰ_K/SO(4)$$

###### Theorem 1

If $`K`$ is nontrivial, $`ℰ_K/SO(4)`$ is aspherical, i.e., a $`K(\pi ,1)`$. Its fundamental group $`\pi `$ is trivial if $`K`$ is a torus knot, and $`ℤ`$ if $`K`$ is hyperbolic. Hence $`ℰ_K≃SO(4)`$ if $`K`$ is a torus knot, and $`ℰ_K≃S^1\times SO(4)`$ if $`K`$ is hyperbolic.

Proof: By restriction of orientation-preserving diffeomorphisms of $`S^3`$ to a chosen copy of the knot $`K`$ we obtain a fiber bundle
$$\text{Diff}^+(S^3\mathrm{rel}K)→\text{Diff}^+(S^3)→ℰ_K$$
where “rel $`K`$” indicates diffeomorphisms which restrict to the identity on $`K`$. The fiber bundle property is a special case of the general result that restriction of diffeomorphisms to a submanifold defines a fiber bundle; see \[Li\].
When we factor out the action of $`SO(4)`$ on $`\text{Diff}^+(S^3)`$ and $`ℰ_K`$ by composition in the range we obtain another fiber bundle
$$\text{Diff}^+(S^3\mathrm{rel}K)→\text{Diff}^+(S^3)/SO(4)→ℰ_K/SO(4)$$
By the Smale Conjecture \[H1\], the total space $`\text{Diff}^+(S^3)/SO(4)`$ of this bundle has trivial homotopy groups, hence $`\pi _i(ℰ_K/SO(4))≅\pi _{i−1}\text{Diff}^+(S^3\mathrm{rel}K)`$ for all $`i`$. In fact, since the bundle is a principal bundle with contractible total space, $`ℰ_K/SO(4)`$ is a classifying space for the group $`\text{Diff}^+(S^3\mathrm{rel}K)`$. In similar fashion, if $`N`$ is a tubular neighborhood of $`K`$ in $`S^3`$ we have a fibration
$$\text{Diff}(S^3\mathrm{rel}N)→\text{Diff}^+(S^3\mathrm{rel}K)→E(N\mathrm{rel}K)$$
where $`E(N\mathrm{rel}K)`$ is the space of embeddings $`N→S^3`$ restricting to the identity on $`K`$. It is a standard fact that $`E(N\mathrm{rel}K)`$ has the homotopy type of the space of automorphisms of the normal bundle of $`K`$ in $`S^3`$. Since $`K`$ is diffeomorphic to $`S^1`$, $`E(N\mathrm{rel}K)`$ thus has the homotopy type of $`ℤ\times S^1`$. The $`ℤ`$ factor measures the different choices of a nonzero section of the normal bundle. Since elements of $`\text{Diff}^+(S^3\mathrm{rel}K)`$ must take longitude to longitude, up to isotopy, for homological reasons, we may as well replace the base space $`E(N\mathrm{rel}K)`$ of the bundle by $`S^1`$. Let $`M`$ be the closure of $`S^3−N`$, a compact manifold with torus boundary. We can then identify $`\text{Diff}(S^3\mathrm{rel}N)`$ with $`\text{Diff}(M\mathrm{rel}∂M)`$. It has been known since the early 1980’s that $`\text{Diff}(M\mathrm{rel}∂M)`$ has contractible components, as a special case of more general results about Haken manifolds \[H2\],\[I\]. Therefore to show that $`\text{Diff}^+(S^3\mathrm{rel}K)`$ has contractible components, hence that $`ℰ_K/SO(4)`$ is aspherical, it suffices to verify that in the long exact sequence of homotopy groups for the preceding fibration, the boundary map from $`\pi _1E(N\mathrm{rel}K)`$ to $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$ is injective. A generator for the infinite cyclic group $`\pi _1E(N\mathrm{rel}K)`$ is represented by a full rotation of each disk fiber. Under the boundary map this gives a diffeomorphism of $`M`$ supported in a collar neighborhood of $`∂M`$ which restricts to a standard Dehn twist in each meridional annulus. One can see this is nontrivial in $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$ by looking at the induced homomorphism on $`\pi _1(M,x_0)`$ for a basepoint $`x_0∈∂M`$. Namely, it is conjugation by a meridian loop, so if the Dehn twist were isotopic to the identity fixing $`∂M`$, this conjugation would be the trivial automorphism of $`\pi _1(M,x_0)`$ and hence the meridian would lie in the center of $`\pi _1(M,x_0)`$. However, the only Haken manifolds with nontrivial center are the orientable Seifert-fibered ones, with the fiber generating the center \[BZ\]. But the only knot complements which are Seifert-fibered are the complements of torus knots, with the fiber being non-meridional. Thus the Dehn twist is nontrivial in $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$, and the same argument shows that any nonzero power of it is also nontrivial, so the boundary map is injective. The next thing to show is that $`\pi _0\text{Diff}^+(S^3\mathrm{rel}K)=0`$ if $`K`$ is a torus knot, or equivalently that $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$ is generated by meridional Dehn twists.
This can be shown by standard 3-manifold techniques, in the following way. The manifold $`M`$ is Seifert fibered over a disk with two multiple fibers of distinct multiplicities. In particular, $`M`$ is the union of two solid tori intersecting along an annulus $`A`$. Modulo meridional Dehn twists, a diffeomorphism of $`M`$ fixing $`∂M`$ can be isotoped, staying fixed on $`∂M`$, so that it takes $`A`$ to itself. The restriction of the diffeomorphism to $`A`$ must be isotopic to the identity $`\mathrm{rel}∂A`$ since it extends over the solid tori. The argument is completed by appealing to the fact that $`\pi _0\text{Diff}(T\mathrm{rel}∂T)=0`$ for $`T`$ a solid torus. Suppose now that $`K`$ is hyperbolic. From the long exact sequence of homotopy groups for the restriction fibration $`\text{Diff}(M)→\text{Diff}(∂M)`$ we obtain a short exact sequence
$$0→\pi _1\text{Diff}(∂M)\stackrel{∂}{→}\pi _0\text{Diff}(M\mathrm{rel}∂M)→\pi _0\text{Diff}_0(M)→0$$
where $`\text{Diff}_0(M)`$ consists of the diffeomorphisms of $`M`$ whose restriction to $`∂M`$ is isotopic to the identity. Injectivity of the map $`∂`$ was shown two paragraphs above, since the image of this map is generated by longitudinal and meridional Dehn twists near $`∂M`$. By famous theorems of Waldhausen and Cerf, along with Mostow rigidity, we have $`\pi _0\text{Diff}(M)≅\text{Isom}(M)`$, the finite group of hyperbolic isometries of $`M`$. The group $`\pi _0\text{Diff}_0(M)`$ is a subgroup of this, the isometries whose restriction to the cusp torus is a rotation. Since hyperbolic isometries are locally determined, each isometry in $`\pi _0\text{Diff}_0(M)`$ is uniquely determined by its restriction to $`∂M`$, a rotation. We can see that $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$ is isomorphic to $`ℤ\times ℤ`$ by the following argument from \[HM\]. The fiber $`\text{Diff}(M\mathrm{rel}∂M)`$ of the map $`\text{Diff}(M)→\text{Diff}(∂M)`$ has the same homotopy groups as the homotopy fiber, whose points are pairs consisting of a diffeomorphism of $`M`$ together with an isotopy of its restriction to $`∂M`$ to the identity. Thus we can view $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$ as rotations of $`∂M`$ together with a homotopy class of isotopies of the rotation to the identity. This is the same as a lift of the rotation to a translation of the universal cover $`ℝ^2`$. Thus we can identify $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$ with a group of translations of $`ℝ^2`$ containing the group $`\pi _1\text{Diff}(∂M)`$ of deck transformations as a finite-index subgroup. Hence $`\pi _0\text{Diff}(M\mathrm{rel}∂M)≅ℤ\times ℤ`$. To show that $`\pi _0\text{Diff}^+(S^3\mathrm{rel}K)≅ℤ`$ we need to see that a meridional Dehn twist of $`M`$ generates a direct summand of $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$. First note that no hyperbolic isometry of $`M`$ can restrict to a purely meridional rotation of $`∂M`$, since such an isometry would extend to a periodic diffeomorphism of $`S^3`$ fixing $`K`$ pointwise, which is ruled out by the Smith conjecture \[MB\]. Thus a meridional Dehn twist of $`M`$ is not a proper multiple of any other element of $`\pi _0\text{Diff}(M\mathrm{rel}∂M)≅ℤ\times ℤ`$. The meridional Dehn twist is then one element of a basis for $`\pi _0\text{Diff}(M\mathrm{rel}∂M)`$, which is to say that it generates a direct summand. This finishes the proof of the statements about $`ℰ_K/SO(4)`$. We deduce the statements about $`ℰ_K`$ by looking at the bundle $`ℰ_K→ℰ_K/SO(4)`$.
If $`K`$ is a torus knot we have shown that the base space of this bundle has trivial homotopy groups, hence is contractible, so the total space deformation retracts onto a fiber. If $`K`$ is hyperbolic the base space has the homotopy type of $`S^1`$, so the total space has the homotopy type of a principal $`SO(4)`$ bundle over $`S^1`$. Since $`SO(4)`$ is connected, this bundle must be trivial. ∎

When $`K`$ is hyperbolic, a generator for the $`ℤ`$ summand of $`\pi _1ℰ_K`$ can be described as follows. A generator of the cyclic group $`\pi _0\text{Diff}_0(M)`$ is represented by a diffeomorphism which extends to a periodic diffeomorphism $`g:(S^3,K)→(S^3,K)`$ preserving orientations of both $`S^3`$ and $`K`$. Let $`g_t:S^3→S^3`$ be an isotopy from the identity to $`g=g_1`$. Restricting $`g_t`$ to $`K`$ gives a path of embeddings $`f_t:K→S^3`$ with $`f_1(K)=f_0(K)=K`$. The restriction of $`f_1`$ to $`K`$ is a diffeomorphism $`K→K`$ isotopic to the identity, say by an isotopy $`h_t`$, so if we follow the isotopy $`f_t`$ by the inverse isotopy $`h_{1−t}`$, we obtain a loop of embeddings of $`K`$ in $`S^3`$. This generates the $`ℤ`$ summand of $`\pi _1ℰ_K`$. If $`K`$ satisfies the linearization conjecture, as described in the introduction, then the diffeomorphism $`g`$ is conjugate, via an element $`\sigma ∈\text{Diff}^+(S^3)`$, to an element of $`SO(4)`$. Replacing $`K`$ by the equivalent knot $`\sigma (K)`$, we may assume that $`g`$ itself is in $`SO(4)`$. For the isotopy $`g_t`$ we can then choose an arc in an $`S^1`$ subgroup of $`SO(4)`$ containing $`g`$. The isotopy $`f_t`$ in the preceding paragraph is then constant in $`ℰ_K/SO(4)`$, so a loop generating $`\pi _1(ℰ_K/SO(4))`$ consists of the reparametrizations $`h_t`$ of $`K`$. This implies that $`ℰ_K`$ has the homotopy type of the coset space $`(S^1\times SO(4))/\mathrm{\Gamma }_K^+`$ where $`\mathrm{\Gamma }_K^+`$ is the cyclic group generated by $`(g|_K,g^{-1})`$. Factoring out parametrizations, which have the homotopy type of $`O(2)`$ or $`SO(2)`$ depending on whether $`K`$ is invertible or not, we obtain:

###### Corollary

If $`K`$ is a torus knot, or a hyperbolic knot satisfying the linearization conjecture, then $`𝒦_K`$ has the homotopy type of $`SO(4)/\mathrm{\Gamma }_K`$. ∎

For a hyperbolic knot $`K⊂S^3`$ the symmetry group $`G_K`$ is always a subgroup of $`\mathrm{\Gamma }_K`$. This is because the natural map $`G_K→\pi _0\text{Diff}^+(S^3−K)≅\mathrm{\Gamma }_K`$ is injective, by the theorem that a periodic diffeomorphism of a Haken manifold which is homotopic to the identity must be part of an $`S^1`$ action (see \[T\],\[FY\]), which happens only when $`S^3−K`$ is a Seifert manifold, hence $`K`$ is a torus knot.

## 2. Satellite Knots

(This section is not yet written.)

References

\[BZ\] G. Burde and H. Zieschang, Eine Kennzeichnung der Torusknoten, Math. Ann. 167 (1966), 169-176.
\[FY\] M. Freedman and S.-T. Yau, Homotopically trivial symmetries of Haken manifolds are toral, Topology 22 (1983), 179-189.
\[G\] A. Gramain, Sur le groupe fondamental de l’espace des noeuds, Ann. Inst. Fourier 27, 3 (1977), 29-44.
\[H1\] A. Hatcher, A proof of the Smale conjecture, Ann. of Math. 117 (1983), 553-607.
\[H2\] A. Hatcher, Homeomorphisms of sufficiently large $`P^2`$-irreducible 3-manifolds, Topology 15 (1976), 343-347. For a more recent version, see the paper “Spaces of incompressible surfaces” available on the author’s webpage: http://math.cornell.edu/~hatcher
\[HM\] A. Hatcher and D.
McCullough, Finiteness of classifying spaces of relative diffeomorphism groups of 3-manifolds, Geometry and Topology 1 (1997), 91-109.
\[I\] N. V. Ivanov, Diffeomorphism groups of Waldhausen manifolds, J. Soviet Math. 12 (1979), 115-118 (Russian original in Zap. Nauk. Sem. Leningrad Otdel. Mat. Inst. Steklov 66 (1976), 172-176. Detailed write-up: Spaces of surfaces in Waldhausen manifolds, Preprint LOMI P-5-80, Leningrad (1980)).
\[Li\] E. L. Lima, On the local triviality of the restriction map for embeddings, Comment. Math. Helv. 38 (1964), 163-164.
\[Lv\] G. R. Livesay, Fixed point free involutions on the 3-sphere, Ann. of Math. 72 (1960), 603-611.
\[MB\] J. W. Morgan and H. Bass, eds., The Smith Conjecture, Academic Press, 1984.
\[M\] R. Myers, Free involutions on lens spaces, Topology 20 (1981), 313-318.
\[R1\] J. H. Rubinstein, Free actions of some finite groups on $`S^3`$, Math. Ann. 240 (1979), 165-175.
\[R2\] J. H. Rubinstein, unpublished.
\[T\] J. Tollefson, Homotopically trivial periodic homeomorphisms of 3-manifolds, Ann. of Math. 97 (1973), 14-26.
# NEUTRINOS FROM THE NEXT GALACTIC SUPERNOVA

## 1 Supernova Neutrino Mass Measurements

### 1.1 Current Knowledge About Neutrino Masses

What is known about the neutrino masses, in terms of direct experimental evidence? After several decades of experiments on the tritium beta spectrum, the published limit on the mass of the $`\overline{\nu }_e`$ (and hence also $`\nu _e`$) is now about 3 eV. These results are still attended by some controversy, due to apparent systematic errors near the endpoint (discussed in detail at a recent workshop at the Institute for Nuclear Theory in Seattle). Much less is known about the $`\nu _\mu `$ mass ($`<170`$ keV from $`\pi `$ decay), and the $`\nu _\tau `$ mass ($`<18`$ MeV from $`\tau `$ decay). It would be a great advance if the latter two limits could be improved to the eV or tens-of-eV level, and it seems unlikely that any terrestrial experiment could do that. There are two indirect arguments which may restrict the $`\nu _\mu `$ and $`\nu _\tau `$ masses. The first says that the predicted background of relic neutrinos (about $`100/\mathrm{cm}^3`$) should not overclose the universe. This leads to a limit on the sum of the neutrino masses of about 100 eV in the most conservative case, and about 10 eV if reasonable values for the non-baryonic dark matter density and the Hubble constant are used. The second is based on the recent evidence for neutrino oscillations (sensitive only to differences of neutrino masses). For three neutrino flavors, there are two independent mass differences, which can be chosen to explain the solar and atmospheric neutrino data. The overall scale can be fixed by using the limit from tritium beta decay, and since the observed mass differences are small, all the masses are then below a few eV. These indirect arguments sound compelling, but they are not hard to evade. The cosmological bound does not apply if the neutrinos are allowed to decay. There are many such models for heavy $`\nu _\tau `$ masses, motivated by particle physics or astrophysics. The oscillation bound does not apply if there are more free mass differences than there are positive signals of neutrino oscillations. For example, suppose the atmospheric neutrinos are undergoing maximal $`\nu _\mu ↔\nu _\tau `$ mixing, LSND is ruled out, and the solar neutrinos are undergoing $`\nu _e↔\nu _{sterile}`$ mixing. Then nothing prevents the $`\nu _2`$ and $`\nu _3`$ masses from having a tiny splitting near 50 or even 500 eV, while leaving the $`\nu _1`$ mass below a few eV. It may be possible in the future to make new astrophysical tests of the neutrino masses, using data on large-scale structure, the cosmic microwave background, weak lensing, and the Lyman-alpha forest. While these techniques claim sensitivities of order 1–10 eV for the sum of the neutrino masses, none are based on direct detection of neutrinos, and hence may be vulnerable to uncertainties in their assumptions. Thus while indirect evidence will be valuable, there is still a need for a direct measurement of the $`\nu _\mu `$ and $`\nu _\tau `$ masses (or, in the presence of mixing, the masses of the heavy mass eigenstates).

### 1.2 Supernova Neutrino Emission

When the core of a large star ($`M≳8M_{\odot }`$) runs out of nuclear fuel, it collapses, with a change in the gravitational binding energy of about $`3\times 10^{53}`$ ergs. This huge amount of energy must be removed from the proto-neutron star, but the high density prevents the escape of any radiation.
Inside of the hot proto-neutron star, neutrino-antineutrino pairs of all flavors are produced. Despite their weak interactions, even the neutrinos are trapped and diffuse out over several seconds, in the end carrying away about 99% of the supernova energy. (I have neglected the flux of $`\nu _e`$ neutrinos necessary to change the core from $`N≈Z`$ nuclei into a neutron star, because the total energy release of this flux is of order 1% of the pair-emission phase.) When the neutrinos are about one mean free path from the edge, they escape freely, with a thermal spectrum (approximately Fermi-Dirac) characteristic of the surface of last scattering. Because different flavors have different interactions with the matter, and because the neutron star temperature is decreasing with increasing radius, the neutrino decoupling temperatures are different. The $`\nu _\mu `$ and $`\nu _\tau `$ neutrinos and their antiparticles have a temperature of about 8 MeV, the $`\overline{\nu }_e`$ neutrinos about 5 MeV, and the $`\nu _e`$ neutrinos about 3.5 MeV. Equivalently, the average energies are about $`⟨E⟩≈25`$ MeV, $`≈16`$ MeV, and $`≈11`$ MeV, respectively. The luminosities of the different neutrino flavors are approximately equal at all times. The neutrino luminosity rises quickly over 0.1 s or less, and then falls off over several seconds. The SN1987A data can be reasonably fit to a decaying exponential with time constant $`\tau `$ = 3 s. The detailed form of the neutrino luminosity used below is less important than the general shape features and their characteristic durations. The estimated core-collapse supernova rate in the Galaxy is about 3 times per century. The present neutrino detectors can easily observe a supernova anywhere in the Galaxy or its immediate companions (e.g., the Magellanic Clouds). Unfortunately, the present detectors do not have large enough volumes to observe a supernova in even the nearest large galaxy (Andromeda, about 700 kpc away).

### 1.3 Time-of-Flight Concept

Even a tiny neutrino mass will make the velocity slightly less than that of a massless neutrino, and over the large distance to a supernova will cause a measurable delay in the arrival time. A neutrino with a mass $`m`$ (in eV) and energy $`E`$ (in MeV) will experience an energy-dependent delay (in s) relative to a massless neutrino in traveling over a distance $`D`$ (in units of 10 kpc, approximately the distance to the Galactic center) of
$$\mathrm{\Delta }t(E)=0.515\left(\frac{m}{E}\right)^2D,$$ (1)
where only the lowest order in the small mass has been kept. A distance of 10 kpc corresponds to a travel time of about $`10^{12}`$ s, and the delays of interest are less than 1 s. If the neutrino mass is nonzero, lower-energy neutrinos will arrive later, leading to a correlation between neutrino energy and arrival time. This is exploitable in some of the charged-current detection reactions for $`\nu _e`$ and $`\overline{\nu }_e`$, since the neutrino energy can be reasonably measured. Using this idea, the next supernova will allow sensitivity to a $`\overline{\nu }_e`$ mass down to about 3 eV. However, the terrestrial experiments now claim to rule out a mass that large, so I ignore the $`\nu _e`$ mass in describing the supernova neutrino signal. For measuring the masses of the $`\nu _\mu `$ and $`\nu _\tau `$ neutrinos, one must realize that only neutral-current detection reactions are possible (the charged-current thresholds are too high).
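As a minimal numerical illustration of Eq. (1) (the function name and the sample values below are our own, not from the original analysis):

```python
def delay_seconds(m_eV, E_MeV, D_10kpc=1.0):
    """Time-of-flight delay of Eq. (1), lowest order in the small mass m."""
    return 0.515 * (m_eV / E_MeV) ** 2 * D_10kpc

# Example: a 30 eV mass at the characteristic energy E_c ~ 32 MeV,
# for a supernova at the Galactic center (D = 1 in units of 10 kpc):
print(delay_seconds(30.0, 32.0))  # ~0.45 s, to be compared with the ~10 s pulse
```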
Since the neutrino energy is not measured in neutral-current interactions, an energy-time correlation technique cannot be used (the incoming neutrino energy cannot be determined, since a complete kinematic reconstruction of the reaction products is not possible). Instead, the strategy for measuring the $`\nu _\tau `$ mass is to look at the difference in time-of-flight between the neutral-current events (mostly $`\nu _\mu `$, $`\nu _\tau `$, $`\overline{\nu }_\mu `$, and $`\overline{\nu }_\tau `$, because of the higher temperature of those flavors) and the charged-current events (just $`\nu _e`$ and $`\overline{\nu }_e`$). We assume that the $`\nu _\mu `$ is massless and will ask what limit can be placed on the $`\nu _\tau `$ mass (other cases will be discussed below). There are three major complications to a simple application of Eq. (1): (i) the neutrino energies are not fixed, but are characterized by spectra; (ii) the neutrino pulse has a long intrinsic duration of about 10 s, as observed for SN1987A; and (iii) the statistics are finite. In some early work, the duration of the pulse was thought to be of order 100 milliseconds, nearly a delta-function. Here (considering the case of the smallest detectable mass), the delay is much less than the width of the pulse, which is what makes the problem much more difficult (the large-mass case is covered elsewhere). Given that the energy-time correlation method cannot be used, we must turn to considerations of the shape of the event rate as a function of time.

### 1.4 Neutrino Scattering Rate

For a time-independent spectrum $`f(E)`$ and luminosity $`L(t_i)`$, the double-differential number distribution at the source is
$$\frac{d^2N_\nu }{dE_\nu dt_i}=f(E_\nu )\frac{L(t_i)}{⟨E_\nu ⟩}.$$ (2)
The neutrino spectrum and average energy $`⟨E_\nu ⟩`$ are different for each flavor, but the luminosities $`L(t_i)`$ of the different flavors are roughly equal, so that each flavor carries away about 1/6 of the total binding energy $`E_B`$. For a massive neutrino, the double-differential number distribution at the detector is
$$\frac{d^2N_\nu }{dE_\nu dt}=f(E_\nu )\frac{L(t−\mathrm{\Delta }t(E_\nu ))}{⟨E_\nu ⟩}$$ (3)
and the scattering rate for a given reaction is
$$\frac{dN_{sc}}{dt}=C\int dE_\nu f(E_\nu )\left[\frac{\sigma (E_\nu )}{10^{-42}\mathrm{cm}^2}\right]\left[\frac{L(t−\mathrm{\Delta }t(E_\nu ))}{E_B/6}\right].$$ (4)
The overall constant for an H<sub>2</sub>O target is
$$C=9.21\left[\frac{E_B}{10^{53}\mathrm{ergs}}\right]\left[\frac{1\mathrm{MeV}}{T}\right]\left[\frac{10\mathrm{kpc}}{D}\right]^2\left[\frac{\mathrm{det}.\mathrm{mass}}{1\mathrm{kton}}\right]n,$$ (5)
where $`n`$ is the number of targets per molecule for the given reaction. For a D<sub>2</sub>O target, the leading factor becomes 8.28 instead of 9.21. For a massless neutrino, $`\mathrm{\Delta }t(E_\nu )=0`$, so the luminosity comes outside of the integral as $`L(t)`$ and completely specifies the time dependence. For a massive neutrino, the time dependence from the luminosity is modified in an energy-dependent way by the mass effects, as above.

### 1.5 Separation of the Massive Signal

The scattering rates of the massless neutrinos ($`\nu _e`$ or $`\overline{\nu }_e`$) can then be used to measure the shape of the luminosity as a function of time. The scattering rate for the massive neutrinos ($`\nu _\tau `$) can be tested for additional time dependence due to a mass. We define two rates: a Reference $`R(t)`$ containing only massless events, and a Signal $`S(t)`$ containing some fraction of massive events.
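To make Eq. (4) concrete, here is a minimal numerical sketch of such a rate. It is our own illustration, not the original code: it assumes a normalized Fermi-Dirac-like spectrum, $`\sigma ∝E_\nu ^2`$ as for $`\nu +d`$, the exponential luminosity discussed above, and an arbitrary overall normalization.

```python
import numpy as np

def nc_rate(t, m_eV=0.0, T=8.0, tau=3.0, D=1.0, alpha=2.0):
    """Eq. (4) by quadrature: event rate (arbitrary units) at detector time t (s)."""
    E = np.linspace(0.5, 100.0, 2000)            # neutrino energy grid, MeV
    f = E**2 / (np.exp(E / T) + 1.0)             # Fermi-Dirac-like spectrum, temperature T
    f /= np.trapz(f, E)                          # normalize to unit area
    sigma = E**alpha                             # cross section ~ E^2 for nu + d
    t_emit = t - 0.515 * (m_eV / E)**2 * D       # emission time implied by the Eq. (1) delay
    L = np.where(t_emit >= 0.0, np.exp(-t_emit / tau), 0.0)   # exponential luminosity
    return np.trapz(f * sigma * L, E)

# A massive component depletes the rate at early times and extends its tail:
for t in (0.1, 1.0, 5.0):
    print(t, nc_rate(t), nc_rate(t, m_eV=100.0))
```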
The Reference $`R(t)`$ can be formed in various ways, for example from the charged-current reaction $`\overline{\nu }_e+p→e^++n`$ in the light water of SK or SNO. (The numbers of events expected for all reactions are given in Table I for SK and Table II for SNO.) The Signal $`S(t)`$ can be based on various neutral-current reactions, though here I will focus on the results for SNO. Thus the primary component of the Signal $`S(t)`$ is the 485 neutral-current events on deuterons. With the hierarchy of temperatures assumed here, these events are 18% ($`\nu _e+\overline{\nu }_e`$), 41% ($`\nu _\mu +\overline{\nu }_\mu `$), and 41% ($`\nu _\tau +\overline{\nu }_\tau `$). The flavors of the neutral-current events of course cannot be distinguished. Under our assumption that only $`\nu _\tau `$ is massive, there is already some unavoidable dilution of $`S(t)`$. In Fig. 1, $`S(t)`$ is shown under different assumptions about the $`\nu _\tau `$ mass. The shape of $`R(t)`$ is exactly that of $`S(t)`$ when $`m_{\nu _\tau }=0`$, though the number of events in $`R(t)`$ will be different. The rates $`R(t)`$ and $`S(t)`$ will be measured with finite statistics, so it is possible for statistical fluctuations to obscure the effects of a mass when there is one, or to fake the effects when there is not. We determine the mass sensitivity in the presence of the statistical fluctuations by Monte Carlo modeling. We use the Monte Carlo to generate representative statistical instances of the theoretical forms of $`R(t)`$ and $`S(t)`$, so that each run represents one supernova as seen in SNO. The best model-independent test of a $`\nu _\tau `$ mass seems to be a test of the average arrival time $`⟨t⟩`$. Any massive component in $`S(t)`$ will always increase $`⟨t⟩`$, up to statistical fluctuations.

### 1.6 $`⟨t⟩`$ Analysis

Given the Reference $`R(t)`$ (i.e., the charged-current events), the average arrival time is defined as
$$⟨t⟩_R=\frac{\underset{k}{\sum }t_k}{\underset{k}{\sum }1},$$ (6)
where the sum is over events in the Reference. The effect of the finite number of counts $`N_R`$ in $`R(t)`$ is to give $`⟨t⟩_R`$ a statistical error:
$$\delta \left(⟨t⟩_R\right)=\frac{\sqrt{⟨t^2⟩_R−⟨t⟩_R^2}}{\sqrt{N_R}}.$$ (7)
For a purely exponential luminosity, $`⟨t⟩_R=\sqrt{⟨t^2⟩_R−⟨t⟩_R^2}=\tau ≈3`$ s. Given the Signal $`S(t)`$ (i.e., the neutral-current events), the average arrival time $`⟨t⟩_S`$ and its error $`\delta \left(⟨t⟩_S\right)`$ are defined similarly. The signal of a mass is that the measured value of $`⟨t⟩_S−⟨t⟩_R`$ is greater than zero with statistical significance. Using the Monte Carlo, we analyzed $`10^4`$ simulated supernova data sets for a range of $`\nu _\tau `$ masses. For each data set, $`⟨t⟩_S−⟨t⟩_R`$ was calculated and its value histogrammed. These histograms are shown in the upper panel of Fig. 2 for a few representative masses. (Note that the number of Monte Carlo runs only affects how smoothly these histograms are filled out, and not their width or placement.) These distributions are characterized by their central point and their width, using the 10%, 50%, and 90% confidence levels. That is, for each mass we determined the values of $`⟨t⟩_S−⟨t⟩_R`$ such that a given percentage of the Monte Carlo runs yielded a value of $`⟨t⟩_S−⟨t⟩_R`$ less than that value. With these three numbers, we can characterize the results of complete runs with many masses much more compactly, as shown in the lower panel of Fig. 2.
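The statistical logic of the $`⟨t⟩`$ test is easy to reproduce in a toy Monte Carlo. The sketch below is our own and deliberately simplified: a pure exponential pulse, a common delay for the massive events (the dispersion is negligible, as discussed below), and illustrative numbers corresponding to $`m≈30`$ eV at $`E_c≈32`$ MeV.

```python
import numpy as np
rng = np.random.default_rng(0)

tau, N_R, N_S = 3.0, 10000, 485        # pulse time constant (s) and event counts
frac, shift = 0.5, 0.45                # massive fraction of S(t); Eq. (1) delay (s)

def average_arrival(t):
    """<t> and its statistical error, Eqs. (6)-(7)."""
    return t.mean(), t.std() / np.sqrt(len(t))

ref = rng.exponential(tau, N_R)        # massless charged-current Reference R(t)
sig = rng.exponential(tau, N_S)        # neutral-current Signal S(t) ...
massive = rng.random(N_S) < frac       # ... about half of which is nu_tau
sig[massive] += shift                  # common time-of-flight delay

tR, dR = average_arrival(ref)
tS, dS = average_arrival(sig)
print(tS - tR, "+/-", np.hypot(dR, dS))  # expect ~0.23 s vs ~0.14 s: a marginal signal
```

That a 30 eV mass gives only a marginal signal in a single toy realization is consistent with the sensitivity limits quoted below.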
Given an experimentally determined value of $`⟨t⟩_S−⟨t⟩_R`$, one can read off the range of masses that would have been likely (at these confidence levels) to have given such a value of $`⟨t⟩_S−⟨t⟩_R`$ in one experiment. From the lower panel of Fig. 2, we see that SNO is sensitive to a $`\nu _\tau `$ mass down to about 30 eV if the SK $`R(t)`$ is used, and down to about 35 eV if the SNO $`R(t)`$ is used. We also investigated the dispersion of the event rate in time as a measure of the mass. A mass alone causes a delay, but a mass and an energy spectrum also cause dispersion (as does the separation of the massive and massless portions of the Signal). We defined the dispersion as the change in the width, $`\sqrt{⟨t^2⟩_S−⟨t⟩_S^2}−\sqrt{⟨t^2⟩_R−⟨t⟩_R^2}`$. We found that the dispersion was not statistically significant until the mass was of order 80 eV or so, i.e., a plot very similar to Fig. 2 can be formed for the width, but the error band is much wider. Since the dispersion is irrelevant, the average delay is well-characterized by a single energy, which for SNO is $`E_c≈32`$ MeV.

### 1.7 Analytic Results

Another nice feature of the proposed $`⟨t⟩`$ analysis, besides its simplicity, is that good estimates can be made analytically. The characteristic delay is
$$⟨t⟩_S−⟨t⟩_R≈\mathrm{frac}(m>0)\times 0.515\left(\frac{m}{E_c}\right)^2D,$$ (8)
where $`\mathrm{frac}(m>0)`$ is the fraction (about 1/2) of massive events in the neutral-current Signal. The above numerical analysis shows that the mass effects occur most significantly in the delay and not the width (they are characterized by a single energy, with negligible dispersion). That characteristic energy $`E_c`$ is roughly at the peak of $`f(E)\sigma (E)`$ (see Fig. 3). This formula for the first moment is always true; the point of the Monte Carlo analysis was to show that the other moments are not changed, so this completely describes the data. If the cross section $`\sigma (E_\nu )`$ depends on energy as $`E_\nu ^\alpha `$ ($`\alpha ≈2`$ for $`\nu +d`$), then the characteristic energy is $`E_c≈(2+\alpha )T`$ and the thermally-averaged cross section is proportional to $`T^\alpha `$, where $`T`$ is the $`\nu _\mu `$ and $`\nu _\tau `$ temperature. For $`\alpha =2`$, and a neutron detection efficiency $`ϵ_n`$, the following scaling relations hold:
$$⟨t⟩_S−⟨t⟩_R∝\left(\frac{m}{T}\right)^2D.$$ (9)
$$N_S∝\frac{1}{T}\frac{1}{D^2}T^2ϵ_n∝\frac{Tϵ_n}{D^2},$$ (10)
$$\delta \left(⟨t⟩_S−⟨t⟩_R\right)≈\frac{\tau }{\sqrt{N_S}}∝\frac{\tau D}{\sqrt{T}\sqrt{ϵ_n}}.$$ (11)
For a non-exponential luminosity, $`\tau `$ is more generally the width of the pulse. If zero delay is measured, then the mass limit is determined by the largest positive delay that could have fluctuated to zero, and thus
$$m_{limit}∝\frac{T^{3/4}\tau ^{1/2}}{ϵ_n^{1/4}}.$$ (12)
Note that this is independent of $`D`$ for any supernova distance in the Galaxy (i.e., as long as there is a reasonable number of counts). Note also that the mass limit scales only with the inverse fourth root of the number of events (via the efficiency $`ϵ_n`$). Besides the scaling, the value of $`m_{lim}`$ at fixed values of these quantities can also be found easily. The analytic results above are in excellent agreement with the full numerical results. Over reasonable ranges of the input parameters, the results hardly change. However, some estimates of the $`\nu _\tau `$ mass sensitivity for proposed detectors assumed a very short duration of the supernova ($`\tau ∼0.5`$ s). If such a short duration were valid, then the SNO sensitivity would also be of order 10 eV.
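For clarity, the scaling (12) follows directly from setting the delay (9) equal to the statistical error (11); this intermediate step, spelled out here, is our addition:

```latex
\left(\frac{m_{\text{limit}}}{T}\right)^{2} D
  \;\propto\; \frac{\tau D}{\sqrt{T\,\epsilon_{n}}}
\qquad\Longrightarrow\qquad
m_{\text{limit}} \;\propto\; \frac{T^{3/4}\,\tau^{1/2}}{\epsilon_{n}^{1/4}} ,
```

with the distance $`D`$ canceling between the two sides, which is why the sensitivity is independent of $`D`$.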
### 1.8 How Model-Dependent are the Results?

Above, I discussed the proposed $`⟨t⟩`$ technique for analyzing the supernova neutrino data for a $`\nu _\tau `$ mass. This technique was designed to be as model-independent as possible. One underlying theme is to calibrate the assumed models as much as possible from the data. While it is probably a good approximation that the luminosities of the different flavors are equal as a function of time, the normalization of the luminosity drops out in the definition of $`⟨t⟩`$, and hence the results depend on the weaker assumption that the shapes of the luminosities are approximately equal as a function of time. The differences among the different flavors at the very early stages (for example, the higher $`\nu _e`$ flux) persist over a duration much less than 10 s, and hence have no effect on the analysis. Another very important point is that the details of the shape of the luminosity as a function of time are never used, because the delay is measured just from the $`⟨t⟩`$ of the neutral-current events minus the same for the charged-current events. Why is this enough? In general, one could describe the scattering rates of charged-current and neutral-current events by their normalizations and a series of all moments ($`⟨t⟩`$, $`⟨t^2⟩`$, etc.). While this would likely require many terms, it is efficient to consider the difference in the moments between the two. The mass effects do not change the normalizations (the numbers of events), and we have shown that the higher moments (e.g., the width) are not significantly changed by a mass. That is, the only statistically significant effects of a mass on the event-rate difference are in the average arrival time $`⟨t⟩`$. If the details of the supernova model are not known from theory, then $`⟨t⟩`$ is what is called a sufficient statistic for the mass. The only aspect of the shape of the luminosity $`L(t)`$ that was used in the analysis was its duration, and this appears only in the error on the delay. Thus the particular assumption of using an exponential luminosity completely disappears from the problem. Any other reasonable model of similar duration would give a very similar result. One also needs to know the $`\nu _\tau `$ temperature to determine the mass limit. Since none of the neutral-current reactions measure the incoming neutrino energy, there is no direct spectral information. The $`\nu _\tau `$ temperature must be determined from the number of events. Since different neutral-current reactions have different energy dependence, one can crudely reconstruct the underlying spectrum (see Fig. 3). The moments analysis of the rates reveals that only the average delay is significant, and thus that just a single energy (the characteristic energy $`E_c`$) is relevant, instead of the fine details of the spectral shape. The final results for the mass sensitivity are given in Table III.

## 2 Supernova Location by Neutrinos

### 2.1 Overview

There has been great interest recently in the question of whether or not a supernova can be located by its neutrinos. If so, this may offer an opportunity to give an early warning to the astronomical community, so that the supernova light curves can be observed from the earliest possible time. An international supernova early alert network has been formed for this purpose, and the details of its implementation were the subject of a recent workshop.
One of the primary motivations for such a network is to greatly reduce the false signal rate by demanding a coincidence between several different detectors. The second motivation is to locate the supernova by its neutrino signal. The interest in the latter is driven by two considerations. First, the neutrinos leave the core at the time of collapse, while the electromagnetic radiation leaves the envelope some hours later, depending on the stellar mass. Thus the neutrino observations can give the astronomers a head start. Second, the next Galactic supernova will likely be in the disk, and hence obscured by dust, and perhaps never visible optically. There are two classes of techniques to locate a supernova by its neutrinos. The first class of techniques is based on angular distributions of the neutrino reaction products, which can be correlated with the neutrino direction. In this case, a single experiment can independently announce a direction and its error. The second method of supernova location, triangulation, is based on the arrival-time differences of the supernova pulse at two or more widely-separated detectors. This technique would require significant and immediate data sharing among the different experiments. ### 2.2 Angular Distributions #### 2.2.1 Neutrino-Electron Scattering: Electron Angular Distribution In neutrino-electron scattering, the scattered electrons have a very forward angular distribution, due to the small electron mass. At supernova neutrino energies, the angle between the incoming neutrino and the outgoing electron is about $`10^{}`$, depending somewhat on neutrino energy and flavor. However, multiple scattering of the struck electron smears out its Čerenkov cone, washing out the dependence on energy and flavor, and introducing an angular resolution of about $`25^{}`$. Naively, if the one-sigma width of the electron angular distribution is $`25^{}`$, then the precision with which its center (i.e., the average) can be defined given $`N_S`$ events is $$\delta \theta \frac{25^{}}{\sqrt{N_S}},$$ (13) where $`N_S`$ is the number of events. For SK , $`N_S320`$, so the cone center could be defined to within about $`1.5^{}`$. For SNO (using both the light and heavy water), $`N_S25`$, so the cone center could be defined to about $`5^{}`$. The equivalent error on the cosine is $`\delta (\mathrm{cos}\theta )(\delta \theta )^2/2`$, i.e., $`3\times 10^4`$ and $`4\times 10^3`$, respectively. However, the neutrino-electron scattering events are indistinguishable on an event-by-event basis from other reactions with only a single electron or positron detected. The primary background in SK or the light water in SNO is thus $`\overline{\nu }_e+pe^++n`$ (in the heavy water in SNO, there are several background reactions). This problem is similar to the SK solar neutrino studies, in which the neutrino-electron scattering events have to be separated from the intrinsic detector background (due to radioactivity, etc). The SK results are presented as an angular distribution in the cosine along the known direction to the Sun, and clear bump in the solar direction is seen, with signal/noise $`1/10`$. Though the concept is the same, for the supernova events the signal/noise is only $`1/30`$. Taking this into account makes the centroiding error larger by a correction factor of $`4`$ for both SK and SNO. With some cuts on energy, it may be possible to reduce this correction factor to $`23`$. 
#### 2.2.2 Neutrino-Nucleus Scattering: Lepton Angular Distributions In the charged-current reactions of $`\nu _e`$ and $`\overline{\nu }_e`$ on nucleons or nuclei, the outgoing electrons or positrons typically have angular distributions of the form $$\frac{\mathrm{d}\sigma }{\mathrm{d}\mathrm{cos}\theta }1+v_ea(E_\nu )\mathrm{cos}\theta ,$$ (14) where $`\theta `$ is the angle between the neutrino and electron (or antineutrino and positron) directions in the laboratory (where the nuclear target is assumed to be at rest) and $`v_e`$ is the lepton velocity in $`c=1`$ units. At energies higher than for supernova neutrinos, terms proportional to higher powers of $`\mathrm{cos}\theta `$ appear, and at the highest energies, all reaction products are strongly forward simply by kinematics. Just above threshold, where the lepton velocity is small, the angular distribution obviously becomes isotropic. At supernova neutrino energies, $`v_e=1`$. Typically, the asymmetry coefficient $`a(E_\nu )`$ is not large, so the angular distribution is weak, but the number of events may be large (for $`\overline{\nu }_e+pe^++n`$, there are nearly $`10^4`$ events expected in SK). Thus it may be possible to use these events to locate the supernova. We first consider how well one could localize the supernova assuming that $`a`$ is known and constant. Given a sample of events, one can attempt to find the axis defined by the neutrino direction. Along this axis, the distribution should be flat in the azimuthal angle $`\varphi `$ and should have the form in Eq. (14) in $`\mathrm{cos}\theta `$. Along any other axis, the distribution will be a complicated function of both the altitude and azimuthal angles. We assume that the axis has been found numerically, and ask how well the statistics allow the axis to be defined. A convenient way to assess that is to define the forward-backward asymmetry as $$A_{FB}=\frac{N_FN_B}{N_F+N_B},$$ (15) where $`N_F`$ and $`N_B`$ are the numbers of events in the forward and backward hemispheres. The total number of events is $`N=N_F+N_B`$. Note that $`A_{FB}`$ will assume an extremal value $`A_{FB}^{extr}`$ along the correct neutrino direction. It can be shown that $$\delta A_{FB}=\frac{1}{\sqrt{N}}\sqrt{1\left(\frac{a}{2}\right)^2}\frac{1}{\sqrt{N}},$$ (16) where the error is nearly independent of $`a`$ for small $`|a|`$, which is the case for the reactions under consideration. In the above, the coordinate system axis was considered to be correctly aligned with the neutrino direction. Now consider what would happen if the coordinate system were misaligned. While in general, all three Euler angles would be needed to specify an arbitrary change in the coordinate system, symmetry considerations dictate that the computed value of $`A_{FB}`$ depends only upon one – the angle $`\theta `$ between the true and the supposed neutrino axis. Thus $`A_{FB}`$ is some function of $`\theta `$ if the axis is misaligned. Using a Legendre expansion, one can show that $$A_{FB}(\theta )=\frac{a}{2}\mathrm{cos}\theta .$$ (17) The error on the alignment is then $$\delta (\mathrm{cos}\theta )=\frac{2}{|a|}\delta A_{FB}\frac{2}{|a|}\frac{1}{\sqrt{N}}.$$ (18) Treating the nucleons as infinitely heavy, the coefficient $`a`$ in Eq. (14) is related to the competition of the Fermi (no spin flip) and Gamow-Teller (spin flip) parts of the matrix element squared: $$a=\frac{|M_F|^2|M_{GT}|^2}{|M_F|^2+3|M_{GT}|^2}.$$ (19) Naive use of this formula for $`\overline{\nu }_e+pe^++n`$ gives $`a=0.10`$ for ($`|M_{GT}/M_F|=1.26`$). 
One expects $`10^4`$ events in SK, so that $`\delta (\mathrm{cos}\theta )0.2`$. Even though the asymmetry parameter $`a`$ is quite small, the number of events is large enough that this technique in SK could give a reasonable pointing error (SNO only has 400 of these events, so $`\delta (\mathrm{cos}\theta )1.0`$, which is too large to be useful). For the charged-current reactions on deuterons, $`\overline{\nu }_e+de^++n+n`$ and $`\nu _e+de^{}+p+p`$, one would have $`a=1/3`$ ($`M_F/M_{GT}0`$). If these channels could be combined (160 events in total), then $`\delta (\mathrm{cos}\theta )0.5`$, which is again rather large. In general, the coefficient $`a(E_\nu )`$ has energy dependence coming from recoil and weak magnetism corrections, each of order $`1/M`$, where $`M`$ is the nucleon mass (such terms also change the total cross section , which has some important applications ). For the reaction $`\overline{\nu }_e+pe^++n`$, these corrections have a dramatic effect on the angular distribution, making it backward at low energies, isotropic at about 15 MeV, and forward at higher energies. Averaged over the expected $`\overline{\nu }_e`$ spectrum from a supernova, one obtains $`\mathrm{cos}\theta =+0.08`$, i.e., a forward distribution. Using the common assumption that the angular distribution is backward (with $`a=0.10`$) would lead one to infer exactly the wrong direction to the supernova. It is convenient to describe the angular distribution by the average cosine, weighted by the differential cross section. In the limit that Eq. (14) holds, $$\mathrm{cos}\theta =\frac{1}{3}v_ea(E_\nu ).$$ (20) For $`\overline{\nu }_e+pe^++n`$, the variation of the average cosine can be written as (keeping only the largest terms of the full expression ): $$\mathrm{cos}\theta 0.034v_e+2.4\frac{E_\nu }{M}.$$ (21) The factor 2.4 is actually $`1.0+1.4`$, the first term from recoil, which always makes the angular distribution more forward, and the second from weak magnetism (which would change sign for the reaction $`\nu _e+ne^{}+p`$, if there were free neutron targets). The variation of $`\mathrm{cos}\theta `$ with $`E_\nu `$ is shown in Fig. 4. Since the deuteron is weakly bound, the recoil and weak magnetism corrections to $`\mathrm{cos}\theta `$ can be reasonably estimated . The effects of recoil and weak magnetism add in the $`\overline{\nu }_e+d`$ channel, and partially cancel in the $`\nu _e+d`$ channel, as seen in Fig. 5. #### 2.2.3 Inverse Beta Decay: Positron-Neutron Separation Vector The results above were based on lepton angular distributions as observed in water-Čerenkov detectors (light or heavy water). Some of the same reactions occur in scintillator detectors, but the lepton angular distributions are not observable due to the isotropic character of scintillation light. Despite that, there is a way to get directional information with a scintillator detector. The idea makes use of the fact that scintillator detectors can measure final particle positions by the relative timing between different phototubes. For $`\overline{\nu }_e+pe^++n`$, the positron is detected nearly at the point of creation. The neutron, however, will be detected (by its capture gamma rays), on average a few centimeters forward of the point of creation. The initially forward motion of the neutron is just a consequence of kinematics. Thus the positron-neutron separation vector points in the incoming antineutrino direction. However, before the neutron can be captured, it must be thermalized. 
On the first several scatterings (about half of the kinetic energy is lost on each step), the forward motion tends to be preserved. In the remaining scatterings until thermal energy is reached, the scattering is isotropic. In the last phase, the neutron wanders randomly, undergoing further scatterings at thermal energy, until it is captured. The neutron wander has the effect of degrading the significance of the initially forward motion. The average displacement and its fluctuations can be calculated by Monte Carlo simulation . At the energies of reactor antineutrinos, the positron-neutron separation is about 1.5 cm and the statistical fluctuation several cm, depending on the neutron capture time in the scintillator . There is an additional uncertainty in the position due to the gamma-ray localization error. For reactor antineutrinos, the source direction is known, so a measurement of the average positron-neutron separation vector can be used to make a background measurement, since background events will degrade the significance of the separation vector from the expectation. The precision with which the reactor can be located was evaluated by the Chooz collaboration and found to be about $`18^{}`$. Given the small displacement and the large error in localization, this is an impressive measurement. At the higher energies of supernova neutrinos, the neutron kinetic energy is larger and the elastic scattering cross section smaller, so the displacement is larger. The Chooz collaboration estimates that if they were to scale their detector to the size of SK (32 kton), that they would be able to locate a supernova to within about $`9^{}`$, not so much worse than the neutrino-electron scattering result of about $`5^{}`$ in SK. The positron-neutron displacement technique is not possible in SK since neutrons are not detected, and not possible in SNO because the neutron wanders over several meters. ### 2.3 Triangulation Finally, there is the technique of triangulation by arrival-time differences. The idea is very simple – for two detectors, the cosine of the angle $`\theta `$ between the axis connecting the two detectors (with separation $`d`$) and the supernova direction is determined by the arrival-time difference $`\mathrm{\Delta }t`$ by $$\mathrm{cos}\theta =\frac{\mathrm{\Delta }t}{d}.$$ (22) For detectors on opposite sides of the Earth, $`d=40`$ ms, and for SK and SNO, $`d=30`$ ms. Two detectors thus define a cone of allowed directions to the supernova. Since the arrival time difference has an error $`\delta (\mathrm{\Delta }t)`$, the cone will have an angular thickness $$\delta (\mathrm{cos}\theta )=\frac{\delta (\mathrm{\Delta }t)}{d}.$$ (23) The timing error is obviously the crucial point, and in order to have $`\delta (\mathrm{cos}\theta )=0.1`$ (comparable to the neutrino-electron scattering result in SNO), one would need $`\delta (\mathrm{\Delta }t)=3`$ ms for the SK-SNO difference. This is much smaller than the duration of the supernova pulse (10 s) or even the risetime (assumed 100 ms). In order to assess the best that triangulation could do, consider the scenario where SK and the light water portion of SNO are each perfect detectors, identical except for size. (The shape of the event rate in the heavy water in SNO will be different, due to the different reactions, and it is not clear how to correct for that). Then the scattering rates observed in SK and the light water in SNO will have the same underlying distribution, which depends on unknown details of the supernova models. 
The observed scattering rates in SK and the light water of SNO can then differ only in normalization, statistical fluctuations, and a possible delay. The normalizations matter only in that they determine the scale of the statistical fluctuations. SK will have about $`10^4`$ events and the light water in SNO about 400 events, so the statistical error will be dominated by the timing error in SNO. That is, the underlying distribution will be measured in SK, and this template will be used in SNO to test for a delay. In order to make estimates of the timing error, we need the shape of the event rate. We assume a short rise (in the current supernova models, this is somewhere between 0 and 200 ms, so we will very conservatively use 100 ms), followed by a long decay (10 s, as for SN1987A). For the normalized event rate, we take

$`f(t,t_0)`$ $`=`$ $`\alpha _1\times {\displaystyle \frac{1}{\tau _1}}\mathrm{exp}\left[+{\displaystyle \frac{(t-t_0)}{\tau _1}}\right],t<t_0`$ (24)

$`f(t,t_0)`$ $`=`$ $`\alpha _2\times {\displaystyle \frac{1}{\tau _2}}\mathrm{exp}\left[-{\displaystyle \frac{(t-t_0)}{\tau _2}}\right],t>t_0,`$ (25)

where

$$\alpha _1=\frac{\tau _1}{\tau _1+\tau _2},\alpha _2=\frac{\tau _2}{\tau _1+\tau _2}.$$ (26)

Then $`f(t,t_0)`$ is a normalized probability density function built out of two exponentials, joined continuously at $`t=t_0`$. In what follows, we assume that this form of $`f(t,t_0)`$ is known to be correct and that $`\tau _1`$ and $`\tau _2`$ are known. The shape of the event rate is shown in Fig. 6. The event rate in SNO will consist of $`N`$ events sampled from $`f(t,t_0)`$, and the event rate in SK will consist of $`N^{\prime }`$ events sampled from $`f(t,t_0^{\prime })`$. Then $`\mathrm{\Delta }t=t_0-t_0^{\prime }`$, and $`\delta (\mathrm{\Delta }t)\simeq \delta t_0`$, since the SNO error dominates. We consider only the statistical error determined by the number of counts. As noted, we want to determine the minimal error on the triangulation. This model, while simple, contains the essential timescales and an adjustable offset. These considerations lead to a well-posed statistical problem: if $`N`$ events are sampled from a known distribution $`f(t,t_0)`$, how well can $`t_0`$ be determined? The Rao-Cramer theorem provides an answer to this question (with a possible subtlety). This theorem allows one to calculate the minimum possible variance in the determination of a parameter (here $`t_0`$), by any technique whatsoever. This minimum variance can be achieved when all of the data are used as “efficiently” as possible, which is frequently possible in practice. One requirement of the theorem is that the domain of positive probability must be independent of the parameter to be determined. This condition is obviously not met for a zero risetime, since then the domain is $`(t_0,\mathrm{\infty })`$. For a nonzero risetime, the domain is technically $`(-\mathrm{\infty },\mathrm{\infty })`$, independent of $`t_0`$, and so the theorem applies. The minimum possible variance in the determination of $`t_0`$ is:

$$\frac{1}{\left(\delta t_0\right)_{\mathrm{min}}^2}=N\times \int dt\,f(t,t_0)\left[\frac{\partial \mathrm{ln}f(t,t_0)}{\partial t_0}\right]^2.$$ (27)

This is the general form for an arbitrary parameter $`t_0`$.
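Before specializing Eq. (27) to the form of $`f(t,t_0)`$ assumed here, it is instructive to check it against a toy Monte Carlo in which $`t_0`$ is fitted by maximum likelihood. The sketch below is not part of the original analysis: the event count and the exponential constants ($`\tau _1=30`$ ms, $`\tau _2=3`$ s, roughly corresponding to the assumed risetime and decay) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
tau1, tau2, N = 0.03, 3.0, 400      # seconds; SNO-like event count (assumed)
a1 = tau1 / (tau1 + tau2)           # fraction of events in the rising edge

def sample(n, t0=0.0):
    """Draw n event times from the two-exponential f(t, t0) of Eqs. (24)-(25)."""
    rise = rng.random(n) < a1
    return t0 + np.where(rise, -rng.exponential(tau1, n),
                         rng.exponential(tau2, n))

def log_like(t0, t):
    """Sum of ln f(t, t0) over events, dropping constant terms; this is allowed
    because alpha1/tau1 = alpha2/tau2 = 1/(tau1 + tau2) on both branches."""
    lo = t < t0
    return np.sum((t[lo] - t0) / tau1) - np.sum((t[~lo] - t0) / tau2)

grid = np.linspace(-0.3, 0.3, 1201)  # trial offsets, 0.5 ms steps
estimates = [grid[np.argmax([log_like(g, t) for g in grid])]
             for t in (sample(N) for _ in range(200))]

# For this f, the integral in Eq. (27) works out to N*(alpha1/tau1**2 +
# alpha2/tau2**2), i.e. a minimum error sqrt(tau1*tau2/N) ~ 15 ms here.
print(np.std(estimates), np.sqrt(tau1 * tau2 / N))
```

With only a few events on the sharp rising edge, the fitted scatter should come out close to, though generally not below, this bound; this small-sample regime is also where the “possible subtlety” of the theorem mentioned above would show up.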
When $`t_0`$ is a translation parameter, i.e., $`f(t,t_0)`$ depends only on $`t-t_0`$, this reduces to

$`{\displaystyle \frac{1}{\left(\delta t_0\right)_{\mathrm{min}}^2}}`$ $`=`$ $`N\times {\displaystyle \int dt\,f(t,t_0)\left[\frac{\partial \mathrm{ln}f(t,t_0)}{\partial t}\right]^2}`$ (28)

$`=`$ $`N\times {\displaystyle \int dt\,\frac{\left[\partial f(t,t_0)/\partial t\right]^2}{f(t,t_0)}}.`$ (29)

For the particular choice of $`f(t,t_0)`$ above, this reduces to

$$\frac{1}{\left(\delta t_0\right)_{\mathrm{min}}^2}=N\times \left(\alpha _1/\tau _1^2+\alpha _2/\tau _2^2\right),$$ (30)

and so the minimum error is

$$\left(\delta t_0\right)_{\mathrm{min}}=\frac{\sqrt{\tau _1\tau _2}}{\sqrt{N}}=\frac{\tau _1}{\sqrt{N_1}}.$$ (31)

Note that $`N_1=N(\tau _1/\tau _2)`$ is approximately the number of events in the rising part of the pulse. Since the rise is the sharpest feature in $`f(t,t_0)`$, it is unsurprising that it contains almost all of the information about $`t_0`$. The total number of events $`N`$ is fixed by the supernova binding-energy release, so a change in the assumed total duration of the pulse, i.e., $`\tau _2`$, would affect the peak event rate and hence the fraction of events in the leading edge. For a more general $`f(t,t_0)`$, one would replace $`\tau _1/\tau _2`$ by this fraction computed directly. For SNO, $`N_1\simeq 10^{-2}\times 400\simeq 4`$, so $`\delta (t_0)\simeq 30\,\mathrm{ms}/\sqrt{4}\simeq 15`$ ms. Since SK has about 25 times more events, the corresponding error would be about 3 ms. Therefore, the error on the delay is $`\delta (\mathrm{\Delta }t)\simeq 15`$ ms and $`\delta (\mathrm{cos}\theta )\simeq 0.50`$ at one sigma. We have not specified the method for extracting $`t_0`$ and hence $`\mathrm{\Delta }t`$ from the data. That is exactly the point of the Rao-Cramer theorem: one can determine the minimum possible error without having to try all possible methods. If the risetime were zero, which seems to be unrealistic, then one can show by application of order statistics that the error becomes

$$\delta (t_0)=\frac{\tau _2}{N}.$$ (32)

Here $`\tau _2/N`$ is simply the spacing between events near the peak. For a more general $`f(t,t_0)`$, but still with a sharp edge at $`t_0`$, one would simply replace $`\tau _2`$ by $`1/f(t_0)`$. In fact, the shape of $`f(t,t_0)`$ is irrelevant except for its effect on the peak rate, i.e., $`f(t_0)`$. So long as $`f(t,t_0)`$ has a sharp edge and the right total duration, allowing a more general time dependence would therefore not change the results significantly. For the two cases, zero and nonzero risetime, we used different mathematical techniques. This may seem like an artificial distinction, and it may seem that these two cases do not naturally limit to each other. In particular, it may seem incompatible that in the first case the error scales as $`1/\sqrt{N}`$, while in the second the error scales as $`1/N`$. Further, one obviously cannot take $`\tau _1\to 0`$ in the first result to obtain the second. However, it can be shown that the two techniques have disjoint regions of applicability (as a function of $`\tau _1`$) and that the formulas match numerically at the boundary between the two. The boundary is the value of $`\tau _1`$ such that the number of events in the rise is of order 1, i.e., the edge appears sharp for this or smaller $`\tau _1`$. The final results for the pointing errors are given in Table IV.

## 3 Concluding Remarks

The next Galactic supernova should be a bonanza for neutrino physics and astrophysics. Should, that is, provided that we can make sense of the data.
The difficulty is that there are many unknown aspects of both the particle-physics properties of neutrinos and the astrophysics of the expected signal. Since the expected core-collapse supernova rate in the Galaxy is about 3 per century, and since there are no neutrino detectors sensitive to supernovae in distant galaxies, we will not have the luxury of many observations. Of course, there is a worldwide effort to improve the numerical models of core-collapse supernovae. And besides neutrinos, there is one other direct probe of the supernova core, and that is gravitational radiation. LIGO may be sensitive to supernovae out to the Virgo Cluster (at rates of order 1 per year), and its observations may improve the understanding of stellar collapse. Since there are uncertainties in the supernova models, the $`t`$ test for the $`\nu _\mu `$ and $`\nu _\tau `$ masses was designed to be as model-independent as possible. The resulting sensitivities for either $`\nu _\mu `$ or $`\nu _\tau `$ are in Table III. If the $`\nu _\mu `$ and $`\nu _\tau `$ are maximally mixed with a small mass difference, then the mass test is really on $`m_2=m_3`$, and the limits in Table III improve by a factor of about $`\sqrt{2}`$. There are also many possible signals of supernova neutrino oscillations. Ideally, the current and near-term terrestrial neutrino oscillation experiments can answer some of the key questions before the next supernova. Particularly crucial is the question of whether there are sterile neutrinos (or several flavors thereof). SNO may tell us whether the solar $`\nu _e`$ neutrinos are oscillating into sterile or active flavors (or at all); similarly for SK, K2K, and MINOS with the atmospheric $`\nu _\mu `$ neutrinos. And MiniBooNE may decide whether the LSND anomaly is correct (and hence whether sterile neutrinos are necessary). These questions and more are discussed thoroughly in several recent reviews. Since the neutrinos leave the supernova core a few hours before the light leaves the envelope, it should be possible for the neutrino experiments to give astronomers an advance warning of a Galactic core-collapse supernova, and also to provide some guidance as to where in the sky to look. Neutrino-electron scattering in SK seems to be the best technique, with an error of order $`5^{\circ }`$, depending on the distance and on the suppression of backgrounds from other reactions. While $`5^{\circ }`$ may sound large to an astronomer, it is about 0.1% of the sky, which will make subsequent searches with small telescopes much easier (the role of amateur astronomers is discussed in a recent article). Supernova location by triangulation seems to be rather difficult at present (see Table IV).

## ACKNOWLEDGMENTS

I am grateful to Gabor Domokos and Susan Kovesi-Domokos for the invitation to a very interesting workshop, and to Petr Vogel for his collaboration. This work was supported by a Sherman Fairchild fellowship at Caltech.
# Entanglement and nonextensive statistics

## Abstract

A generalization of the von Neumann mutual information is presented in the context of Tsallis’ nonextensive statistics. As an example, entanglement between two (two-level) quantum subsystems is discussed. Important changes occur in the generalized mutual information, which measures the degree of entanglement, depending on the entropic index $`q`$.

Keywords: quantum information, entanglement, Tsallis statistics. PACS: 05.30.Ch; 03.67.-a; 89.70.+c

Entropy is undoubtedly one of the most important quantities in physics. More than a century after its introduction, it still generates discussion about its nature and usefulness. Of special importance was the statistical interpretation of entropy given by Boltzmann, which allowed not only the development of classical statistical mechanics, but also the definition of its quantum mechanical counterpart, known as the von Neumann entropy. The von Neumann entropy associated with a quantum state of a system described by a density operator $`\widehat{\rho }`$ is (with Boltzmann’s constant $`k`$ set equal to one)

$$S=-\text{Tr}\left[\widehat{\rho }\mathrm{ln}\widehat{\rho }\right],\text{Tr}\widehat{\rho }=1.$$ (1)

The von Neumann entropy does not depend on any of the system’s observables, being a function of the state itself. It is easy to see that if the above-mentioned system is in a pure state $`\widehat{\rho }=|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|`$, its entropy vanishes. Moreover, under unitary evolution the entropy remains the same. However, for statistical mixtures of pure states we have $`S>0`$, i.e., classical uncertainties increase the entropy of the state. Recently Tsallis proposed a generalization of Boltzmann’s entropy, which in the quantum mechanical case reads

$$S_q=\frac{1-\text{Tr}\left[\widehat{\rho }^q\right]}{q-1}.$$ (2)

The entropic index $`q`$ is a real parameter related to the (nonextensive) properties of the relevant physical system. In the limit $`q\to 1`$, von Neumann’s entropy is recovered. The Tsallis entropy has been successfully applied to several interesting problems involving nonextensive systems, which are normally intractable by means of Boltzmann’s statistics. Amongst the problems treated within Tsallis’ formalism we may cite Lévy superdiffusion and anomalous correlated diffusion, turbulence in 2D pure-electron plasma, and the analysis of blackbody radiation. There are more convenient values (or ranges of values) of the entropic index $`q`$, depending on the specific system being treated. For instance, in the problem of thermalization in electron-phonon systems, $`q>1`$, while in the treatment of low-dimensional dissipative systems, $`q<1`$. Entropy also plays a fundamental role in classical information theory as well as in its quantum version. It may be considered as the average amount of information which is missing before observation. This may be quantitatively expressed, for instance, through the Kullback-Leibler measure of information, sometimes called the relative entropy. A generalization of this measure, with applications within the framework of Tsallis’ statistics, has recently been presented in the literature, although in its classical version only. A discussion of channel capacities in nonextensive statistics may also be found in the literature.
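Because both entropies in Eqs. (1) and (2) depend only on the eigenvalues of $`\widehat{\rho }`$, they are easy to evaluate numerically. The following minimal Python sketch (the function name is an illustrative choice, not from the paper) computes $`S_q`$ and checks that the $`q\to 1`$ limit reproduces the von Neumann entropy.

```python
import numpy as np

def tsallis_entropy(rho, q):
    """Tsallis entropy S_q = (1 - Tr[rho^q]) / (q - 1), Eq. (2), with k = 1.
    For q -> 1 this reduces to the von Neumann entropy S = -Tr[rho ln rho]."""
    p = np.linalg.eigvalsh(rho)      # eigenvalues of the density matrix
    p = p[p > 1e-12]                 # discard numerically zero eigenvalues
    if abs(q - 1.0) < 1e-8:          # von Neumann limit
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# Example: an equal-weight two-level mixture, rho = diag(1/2, 1/2).
rho = np.diag([0.5, 0.5])
for q in (0.5, 1.0, 2.0):
    print(q, tsallis_entropy(rho, q))
# q -> 1 gives ln 2 ~ 0.693; q = 2 gives (1 - 1/2)/(2 - 1) = 1/2.
```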
Nevertheless, discussions of the implications of generalized statistics for purely quantum mechanical problems, such as the measure of entanglement, are not normally found in the literature. A quantity also used to compare distributions, as well as quantum states, is the mutual information, or mutual entropy. The quantum (von Neumann) mutual information $`I`$ relative to two subsystems ($`A`$ and $`B`$) may be written as

$$I=S_A+S_B-S_{AB},$$ (3)

where $`S_A`$ ($`S_B`$) is the entropy relative to subsystem $`A`$ ($`B`$), and $`S_{AB}`$ is the entropy of the overall state, described by a density operator $`\widehat{\rho }_{AB}`$. The reduced density operators relative to the subsystems, $`\widehat{\rho }_A`$ and $`\widehat{\rho }_B`$, are obtained from $`\widehat{\rho }_{AB}`$ through the usual partial-tracing operation,

$$\widehat{\rho }_A=\text{Tr}_B\widehat{\rho }_{AB},\widehat{\rho }_B=\text{Tr}_A\widehat{\rho }_{AB}.$$ (4)

The von Neumann mutual information is then calculated using the quantum entropy in (1), i.e., $`S_i=-\text{Tr}\left[\widehat{\rho }_i\mathrm{ln}\widehat{\rho }_i\right]`$. If the joint state $`\widehat{\rho }_{AB}`$ is a pure state, then its von Neumann entropy $`S_{AB}=0`$, and according to the Araki-Lieb inequality

$$|S_A-S_B|\le S_{AB}\le S_A+S_B,$$ (5)

we have that $`S_A=S_B`$, which means that the von Neumann mutual information is simply $`I=2S_A`$. In this pure-state case there are no classical uncertainties, and the correlations are purely quantum mechanical. Otherwise, if the joint state is a statistical mixture, $`S_{AB}>0`$, and there will be a mixing of classical and quantum correlations. A convenient property of the von Neumann mutual information is that it is always non-negative. Quantum information theory has experienced remarkable growth in the past years, mainly motivated by potential applications in communication and computation. In particular, an adequate measure of quantum correlations and entanglement is of central importance in this field. It would therefore be interesting to discuss a measure of correlations, such as the mutual information, in a more general context. Here I present a straightforward generalization of the von Neumann mutual information based on the Tsallis entropy ($`S_q`$),

$$I_q=S_{qA}+S_{qB}-S_{qAB}.$$ (6)

This quantity would represent a generalization of the measure of correlations for a wider (nonextensive) class of quantum systems. Now I am going to discuss an example involving a pair of two-state subsystems, $`A`$ and $`B`$. The relevant basis states will be denoted $`|0_\alpha \rangle `$ and $`|1_\alpha \rangle `$ ($`\alpha =A,B`$), with $`\langle i_\alpha |j_\alpha \rangle =\delta _{ij}`$. Let us assume that the overall system ($`A`$ plus $`B`$) is prepared in a state represented by the following state vector:

$$|\mathrm{\Psi }_{AB}\rangle =p^{1/2}|0_A\rangle |1_B\rangle +(1-p)^{1/2}|0_B\rangle |1_A\rangle ,$$ (7)

with $`0\le p\le 1`$. If $`p=0.5`$ we have a maximally entangled state, and for $`p=0`$ (or $`p=1`$) we have a disentangled (or product) state. We may even write a more general state, which can include loss of coherence, as

$`\widehat{\rho }_{AB}`$ $`=`$ $`p|0_A\rangle |1_B\rangle \langle 1_B|\langle 0_A|+(1-p)|1_A\rangle |0_B\rangle \langle 0_B|\langle 1_A|+`$ (8)

$`\gamma ^{1/2}[p(1-p)]^{1/2}\left(|0_A\rangle |1_B\rangle \langle 0_B|\langle 1_A|+|1_A\rangle |0_B\rangle \langle 1_B|\langle 0_A|\right),`$

where the parameter $`\gamma `$ $`(0\le \gamma \le 1)`$ determines whether the state $`\widehat{\rho }_{AB}`$ is a pure entangled state ($`\gamma =1`$) or a statistical mixture ($`\gamma =0`$).
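A quick numerical construction of this state is sketched below; it can be used to cross-check the closed-form expressions derived in the following. The basis ordering and function names are my own choices, and the `tsallis_entropy` helper is the one defined in the sketch above.

```python
import numpy as np

def rho_AB(p, gamma):
    """Density matrix of Eq. (8) in the product basis
    {|0_A 0_B>, |0_A 1_B>, |1_A 0_B>, |1_A 1_B>} (an ordering chosen here)."""
    c = np.sqrt(gamma * p * (1.0 - p))
    rho = np.zeros((4, 4))
    rho[1, 1] = p                # population of |0_A 1_B>
    rho[2, 2] = 1.0 - p          # population of |1_A 0_B>
    rho[1, 2] = rho[2, 1] = c    # coherences, controlled by gamma
    return rho

def reduced(rho):
    """Partial traces of Eq. (4): rho_A = Tr_B rho and rho_B = Tr_A rho."""
    r = rho.reshape(2, 2, 2, 2)             # indices (A, B, A', B')
    return (np.trace(r, axis1=1, axis2=3),  # trace over B
            np.trace(r, axis1=0, axis2=2))  # trace over A

def mutual_information(p, gamma, q):
    """Generalized mutual information I_q of Eq. (6)."""
    rho = rho_AB(p, gamma)
    rho_A, rho_B = reduced(rho)
    return (tsallis_entropy(rho_A, q) + tsallis_entropy(rho_B, q)
            - tsallis_entropy(rho, q))

# Pure maximally entangled state: at q = 1, I = 2 S_A = 2 ln 2 ~ 1.386.
print(mutual_information(0.5, 1.0, 1.0))
```

For $`\gamma =1`$ and $`q=1`$ this returns $`2\mathrm{ln}2`$, matching the pure-state result $`I=2S_A`$ quoted above.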
The partial-tracing operation (4) produces the following density operators for the subsystems $`A`$ and $`B`$:

$$\widehat{\rho }_A=p|0_A\rangle \langle 0_A|+(1-p)|1_A\rangle \langle 1_A|,$$ (9)

and

$$\widehat{\rho }_B=(1-p)|0_B\rangle \langle 0_B|+p|1_B\rangle \langle 1_B|.$$ (10)

The generalized mutual information for this particular system will then read

$$I_q=\frac{1}{q-1}\left[1+\eta _+^q+\eta _{-}^q-2\left(p^q+(1-p)^q\right)\right],$$ (11)

where

$$\eta _\pm =\frac{1}{2}\left[1\pm \sqrt{1-4p(1-p)(1-\gamma )}\right].$$ (12)

Now I analyze the behaviour of the generalized mutual information as a function of the entropic parameter $`q`$. I shall remark that in the global pure-state case ($`\gamma =1`$) the mutual information exactly represents the degree of entanglement between the states belonging to the subsystems $`A`$ and $`B`$, due to the lack of “classical noise”. In Fig. 1 we have a plot of the generalized mutual information $`I_q`$ as a function of $`q`$ for different values of the parameter $`\gamma `$, with $`p=0.5`$, which corresponds to a maximally entangled state in the case of $`\widehat{\rho }_{AB}`$ being a pure state. For the range of values of $`q`$ chosen here ($`0\le q\le 2`$) the generalized mutual information is positive definite (actually, $`I_q`$ is positive even for larger values of $`q`$). If $`q=1`$ we have the usual von Neumann mutual information. For $`\gamma =1`$ (pure state), the mutual information decreases monotonically: it attains its maximum value, $`I_q=2`$, at $`q=0`$, and goes asymptotically to zero as $`q`$ increases. However, an interesting behaviour is noticeable for states having a small deviation from a pure state. For instance, if $`\gamma =0.999`$, although the von Neumann mutual information remains basically the same ($`I\simeq 2\mathrm{ln}2`$), the generalized mutual information will differ substantially from that of a pure state ($`\gamma =1.0`$) for not so large values of $`q`$ ($`q<0.5`$), as is seen in Fig. 1. It starts increasing, up to a maximum value at $`q\simeq 0.33`$, and then decreases again. In this case there will in general be two different values of $`q`$ giving the same mutual information. We may also analyze the behaviour of the generalized mutual information for fixed $`\gamma =1`$ (pure state) and different values of the weight $`p`$. If $`p=1`$ the state is a pure disentangled one, and therefore has mutual information $`I_q`$ equal to zero. However, even for a very small “entangled component”, for instance if we take $`p=0.999`$, the generalized mutual information will assume nonzero values over a range of values of $`q`$. This is shown in Fig. 2. For $`q=1`$, $`I_q\simeq 0`$, which means that according to von Neumann’s mutual information the state is viewed as being completely disentangled. For other values of $`q`$, however, the generalized mutual information may be nonzero. This shows an extreme sensitivity of this measure of the degree of entanglement to the entropic index $`q`$, especially when the quantum state is very close to being either a pure entangled state or a disentangled state. We conclude that entanglement may arise (or be enhanced) depending on the properties of a given physical system, such as extensivity, which is in turn quantified by the entropic index $`q`$.
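The curves in Figs. 1 and 2 are straightforward to reproduce from Eqs. (11) and (12). A short sketch is given below (the function name and the grid of $`q`$ values are illustrative choices); it reproduces the qualitative features just discussed.

```python
import numpy as np

def I_q_closed(p, gamma, q):
    """Generalized mutual information from Eqs. (11)-(12); requires q != 1."""
    s = np.sqrt(1.0 - 4.0 * p * (1.0 - p) * (1.0 - gamma))
    eta_plus, eta_minus = 0.5 * (1.0 + s), 0.5 * (1.0 - s)
    return (1.0 + eta_plus ** q + eta_minus ** q
            - 2.0 * (p ** q + (1.0 - p) ** q)) / (q - 1.0)

# Scan 0 < q <= 2, skipping the (removable) singularity at q = 1.
qs = [q for q in np.arange(0.05, 2.01, 0.05) if abs(q - 1.0) > 1e-9]
for gamma in (1.0, 0.999, 0.0):
    curve = [I_q_closed(0.5, gamma, q) for q in qs]
    print(f"gamma={gamma}: I_q = {curve[0]:.3f} at q = {qs[0]}, "
          f"{curve[-1]:.3f} at q = {qs[-1]}")
# For gamma = 1 the curve decreases monotonically from I_0 = 2; for
# gamma = 0.999 it is lower at small q and peaks near q ~ 0.33.
```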
The definition of the generalized mutual information presented here ($`I_q`$ in Eq. (6)) is not the only possible one; a slightly different form has been proposed in the literature, in its classical version. It would therefore be worth comparing our definition of the mutual information in (6) with the quantum mechanical counterpart of that proposal, which may be written as

$`I_q^{\prime }`$ $`=`$ $`S_{qA}+S_{qB}-S_{qAB}+(1-q)S_{qA}S_{qB}=I_q+(1-q)S_{qA}S_{qB}`$ (13)

$`=`$ $`{\displaystyle \frac{1}{q-1}}\left[1+\eta _+^q+\eta _{-}^q-2\left(p^q+(1-p)^q\right)-\left(1-p^q-(1-p)^q\right)^2\right].`$

We note that this definition of the generalized mutual information contains an additional “crossed term”, $`(1-q)S_{qA}S_{qB}`$, relative to the definition we have already used ($`I_q`$). This might result in some differences, which of course will depend on the values of the parameters $`p`$ and $`\gamma `$. In the limit $`q\to 1`$, von Neumann’s mutual information is recovered in either case. It would be convenient to perform a graphical comparison. For that I have plotted in Fig. 3 the generalized mutual information $`I_q^{\prime }`$ from Eq. (13) as a function of the entropic index $`q`$, with $`p=0.5`$, analogously to the situation depicted in Fig. 1. Despite the differences, we verify that both definitions of the generalized mutual entropy, $`I_q`$ and $`I_q^{\prime }`$, exhibit a very similar qualitative behaviour. In particular, the important sensitivity discussed above is present in both cases. The same is true in the case in which there is a small “entangled component” (analogous to the situation in Fig. 2). The corresponding plot for $`I_q^{\prime }`$ is shown in Fig. 4, and again its qualitative behaviour is about the same as the one in Fig. 2. This means that the discussion carried out above is also valid in the case of the alternative generalized mutual information $`I_q^{\prime }`$. In summary, the “crossed term” $`(1-q)S_{qA}S_{qB}`$ does not appreciably affect the (qualitative) behaviour of the generalized mutual information for those values of the parameters $`p`$ and $`\gamma `$ which are relevant, at least for what has been discussed here.

I have presented a generalization of the quantum mechanical von Neumann mutual information within Tsallis’ nonextensive statistics. This observable-independent quantity, here denoted $`I_q`$, is important, for instance, for determining the degree of entanglement between different subsystems. I have found that, depending on the value of the entropic index $`q`$ characteristic of Tsallis statistics, the generalized mutual information, which measures quantum correlations, may assume values very different from those obtained in the von Neumann case ($`q=1`$). The strong dependence of the mutual information on the entropic index may of course be associated with the extensivity properties of the relevant physical system, and entanglement arises for not so large values of $`q`$. I have also compared two different possible definitions of the generalized mutual information, showing that the sensitivity to the entropic index $`q`$ is basically the same in both cases. This represents a first attempt at establishing a connection between an intrinsic property of physical systems (extensivity) and the measure of the degree of entanglement between different subsystems.